Part II – Kubernetes DevOps : Introduction to Helm

This is the second post in a multi-part blog series on Kubernetes DevOps using Azure. I am co-authoring this series with my colleague at Microsoft, Daniel Selman. We recently worked on a Kubernetes project together and thought we would share our learnings.

In the last post, you got to better understand the application that was going to be deployed in the Kubernetes cluster. In this post, you will learn about the tool called “Helm”.

Part I: Introduction to the Historic Events Microservice
Part II: Getting started with Helm
Part III: VSTS Build (Helm package + containerization of application)
Part IV: VSTS Release (using Helm)
Part V: Lessons Learned – When things go wrong!

So what is Helm?

Do you know how all things Kubernetes are named after nautical terms? Helm is no different.
Helm is a package manager for Kubernetes, analogous to apt-get in Linux environments. It is made up of two components: Tiller, the server-side component, and Helm, the client-side component. Helm packages are known as charts, and by default Helm pulls them from a public chart repository; however, it can be configured to use a private repository (like Azure Blob Storage) instead. Helm charts are written in a mix of YAML and Go templating syntax.
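To give a feel for the client side, here are a few repository commands; the private repository name and URL below are placeholders for illustration, not something used in this series:

# List the repositories the Helm client knows about (the public "stable" repo is configured by default)
helm repo list

# Add a private chart repository, e.g. one backed by Azure blob storage (hypothetical URL)
helm repo add myrepo https://mystorageaccount.blob.core.windows.net/charts

# Search the configured repositories for a chart
helm search mysql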

[Image: Helm client and Tiller architecture]
Source: https://www.slideshare.net/alexLM/helm-application-deployment-management-for-kubernetes

Helm can be used to empower your DevOps workflows in two distinct ways. First, it allows for the parameterization of YAML files for K8s deployments. This means that many people can use YAML from a shared source without modifying the file itself; instead, they pass their individual values at runtime (e.g. a username for a ConfigMap).
For example, to deploy and configure the MySQL Helm Chart you would run the following command:

helm install --name my-release stable/mysql

No more diving into the YAML to get your deployment up and running. Pretty convenient right?
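You can go a step further and override the chart's default values straight from the command line. A minimal sketch; mysqlUser and mysqlDatabase are illustrative value names, so check the chart's values.yaml for the ones it actually exposes:

# Override individual values at install time instead of editing any YAML
helm install --name my-release --set mysqlUser=appuser,mysqlDatabase=events stable/mysql

# Or keep your overrides in a file and pass it in
helm install --name my-release -f my-values.yaml stable/mysql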

Second, it provides a standardized way of distributing and implementing all the associated YAML for an application. Microservices are cool (minimizing dependencies makes everyone’s lives easier), but they also mean that many different containers are needed to get an application running. Kubernetes adds to this sprawl by introducing additional constructs that need to be defined (services, configmaps, secrets). As a result, even a basic three-tier application can require almost a dozen K8s constructs (and likely a dozen different YAML files). Even someone who knows the application like the back of their hand likely wouldn’t know how, and in what order, to deploy these different files.

Helm handles that for you!

Instead of running a dozen commands to deploy the different components of your application, you throw all your YAML into the templates folder of your chart (we’ll get to that later) and Helm will handle it for you.


Quick note on the YAML we’re working with

A previous blog post went through the process of containerizing our history application. The purpose of this post is to cover the Helm piece of the puzzle, but to give you an idea of what we are starting with, here is the vanilla YAML.
We’ve got four files in total for the application: asp-web-dep, asp-web-svc, node-api-dep, and node-api-svc. All of the containers are pulled from Azure Container Registry. I’ll include the four files here for reference.

asp-web-dep.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspcoreweb-dep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aspcoreweb
        tier: frontend
        track: stable
    spec:
      containers:
        - name: demowebapp
          image: "rzdockerregistry.azurecr.io/aspcoreweb:BuildNumber"
          ports:
            - name: http
              containerPort: 80
      imagePullSecrets:
        - name: sec

asp-web-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: aspcoreweb-svc
spec:
  selector:
    app: aspcoreweb
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  type: LoadBalancer

node-api-dep.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nodeapi-dep
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nodeapi
        tier: backend
        track: stable
    spec:
      containers:
        - name: nodeapi
          image: "rzdockerregistry.azurecr.io/nodeapi:BuildNumber"
          env:
            - name: url
              value: https://rzshared.blob.core.windows.net/data
          ports:
            - name: http
              containerPort: 8080
      imagePullSecrets:
        - name: sec

node-api-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: nodeapi-dep
spec:
  selector:
    app: nodeapi
    tier: backend
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Let's make a chart

If you haven’t yet, get kubectl and helm installed on your machine and have your kubectl configured to point at a Kubernetes Cluster (we’ll be using AKS, which you can get started with here). Helm uses your kube config so it should play nice with your cluster out of the box.
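If you don't have a cluster yet, a minimal AKS setup looks roughly like this (the resource group and cluster names are placeholders, so adjust them to your subscription):

# Create a resource group and a small AKS cluster, then merge its credentials into your kube config
az group create --name k8s-demo-rg --location eastus
az aks create --resource-group k8s-demo-rg --name k8s-demo --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group k8s-demo-rg --name k8s-demo

# Confirm kubectl is pointed at the new cluster
kubectl get nodes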
At the time of writing, Helm requires Tiller, the server-side component. Run the following command to initialize Tiller on your cluster:

helm init
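Once that completes, a quick sanity check confirms both sides are talking (the label selector assumes the default labels that helm init applies to Tiller):

# Should report both a Client and a Server (Tiller) version once Tiller's pod is running
helm version

# Tiller runs as a deployment in the kube-system namespace
kubectl get pods --namespace kube-system -l app=helm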

Next, let's scaffold a chart. Running a simple helm create [name] command creates a basic NGINX chart, which we will replace with the components of our application. First, run the helm create command:

helm create [chart_name]

This will create a new directory with all the elements of the helm chart.
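At the time of writing, the scaffolded chart looks roughly like this (file names can vary slightly between Helm versions):

mychart/
  Chart.yaml          # chart name, version, and description
  values.yaml         # default values consumed by the templates
  charts/             # any dependent charts
  templates/          # the YAML (plus Go templating) that gets rendered and deployed
    deployment.yaml   # the sample NGINX deployment created by helm create
    service.yaml
    ingress.yaml
    _helpers.tpl
    NOTES.txt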

This blog isn’t going to cover all the elements of a Helm chart; instead it focuses on the templates folder and the values.yaml file. The templates folder is where your YAML will be placed. It’s currently populated with the NGINX files, so you’ll want to delete all of the content in this folder and replace it with the YAML for your application.

Similarly, delete the contents (not the file) of values.yaml. Let’s start from a blank slate and replace it with the following values. The buildNumber will be used later on for the VSTS pipeline, and the imagePullSecret will be used to specify the… well, imagePullSecret. Don’t worry about the specific values, as these can be updated later on.

buildNumber: BuildNumber
imagePullSecret: acr 
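To give an idea of how these values get consumed, here is a sketch of how the container section of a deployment template might reference them; it mirrors our asp-web-dep file, though the exact lines in the finished chart may differ:

      containers:
        - name: demowebapp
          image: "rzdockerregistry.azurecr.io/aspcoreweb:{{ .Values.buildNumber }}"
      imagePullSecrets:
        - name: {{ .Values.imagePullSecret }}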

We will make one modification to the YAML files, however. Under the hood, Helm has a “Release” object which contains information about the deployment of the Helm chart. Specifically, Release.Name provides a unique identifier for your release so that you can deploy one chart to a cluster many times without errors from overlapping names. We’ve added a reference to the release name attribute in each of the YAML files, like so:

  name: {{ .Release.Name }}-aspcoreweb-dep
  name: {{ .Release.Name }}-aspcoreweb-svc
  name: {{ .Release.Name }}-nodeapi-dep
  name: {{ .Release.Name }}-nodeapi-dep

Let's recap. We’ve initialized Tiller on our cluster, scaffolded a Helm chart, and thrown our (mostly) vanilla YAML files into the templates folder.
Our last step is to package it up for ease of distribution. Navigate to the base directory of your Helm chart and run the following command:

helm package .

Now your chart can be distributed and installed on your cluster using helm install:

helm install [chart_name]
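A couple of hedged examples of what that can look like in practice (the release name and value overrides are purely illustrative):

# Install straight from the chart directory, overriding values at install time
helm install ./mychart --name historic-events --set buildNumber=42,imagePullSecret=sec

# Or install the archive produced by helm package
helm install mychart-0.1.0.tgz --name historic-events

# Check what got deployed
helm ls
kubectl get deployments,services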

Now that we have some familiarity with the application, Kubernetes, and helm, we are going to transition to VSTS to handle the Build and Release process from code to chart deployment over the next few blog posts, so make sure to check back as we continue this series.

Part I – Kubernetes DevOps : Introduction to the Historic Events Microservice

This is the first post in a multi-part blog series on Kubernetes DevOps using Azure. I am co-authoring this series with my colleague at Microsoft, Daniel Selman. We recently worked on a Kubernetes project together and thought we would share our learnings.

Anyways, below is a high-level structure of the blog posts we are planning to publish:

Part I: Introduction to the Historic Events Microservice
Part II: Getting started with Helm
Part III: VSTS Build (Helm package + containerization of application)
Part IV: VSTS Release (using Helm)
Part V: Lessons Learned – When things go wrong!

We do assume that you have basic knowledge of K8s and Docker containers, as we don’t really cover the basics of either of those in this blog series.

Software/Services

Following is the list of software you want to install on your machine.

• Kubectl
• Helm
• Docker
• Minikube (optional, only needed for local testing)
• Git
• Azure CLI

If you would like to use a script to install this software on a Linux VM (tested on Ubuntu 16.04), you can download it here: https://github.com/razi-rais/microservices/blob/master/reference-material/install-k8s-lab-software.sh
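If you would rather run the commands yourself, a rough sketch of the individual installs on Ubuntu looks like this (the Helm install script URL reflects the Helm 2 era and may have moved since):

# Docker
curl -fsSL https://get.docker.com | sh

# Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# kubectl (via the Azure CLI helper)
sudo az aks install-cli

# Helm client
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash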

On the services side, we will be using Azure AKS and VSTS. In case you don’t have an Azure subscription, you can get a free Azure trial here: https://azure.microsoft.com/en-us/offers/ms-azr-0044p

Alright, so for demonstration purposes we have created a simple Historic Events microservice. We thought it wouldn’t hurt to throw in some history while working with modern technologies!

Overview

From a technical perspective, we have a microservice that serves the UI, written in ASP.NET Core 2.0. It pulls data by talking to various RESTful endpoints exposed by a Node.js API served by another microservice. The actual content (the details about the historic events) served by the API is stored in various JSON files, which are persisted as blobs on Azure Storage.
In a nutshell, from an end user standpoint the web app home page looks like this:

[Image: Historic Events web app home page]

When a user wants to learn more about a particular historic event, they can either select that event from the top menu, or simply click on the description of the event provided on the home page.

For example, the French Revolution event page is shown below. All event detail pages follow a similar table-based layout to list critical events.

[Image: French Revolution event page]

Code Walkthrough

The code and all relevant artifacts are available on GitHub: https://github.com/razi-rais/aks-helm-sample

[Image: contents of the GitHub repository]

This is a plain vanilla ASP.NET Core 2.0 web application.

HistoricEvent (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/Event.cs#L8) defines a basic entity that represents an event object. Its attributes are the date and the description of the historic event.

[Image: HistoricEvent entity]

Most of the actual work happens inside the HomeController, which provides methods to connect to the backend API service and fetch the data.

The GetEvent (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/HomeController.cs#L38) method takes the URL of an endpoint as a parameter. It connects to that endpoint, reads the content as a string asynchronously, and ultimately converts it into JSON objects stored in a List of type HistoricEvent. Finally, it returns the List object containing all the events.

[Image: GetEvent method]

If you are wondering who calls GetEvent, it is the method called Event (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/HomeController.cs#L54).

[Image: Event action method]

This is basically an action tied to the view. The parameter id essentially acts as a key referring to the event we are interested in fetching from the backend service (e.g. ww1, ww2, etc.). The method itself is trivial, and we have left most of the optimization out. It does the bare minimum: it prints to the console which endpoint it is going to connect to (the port is currently set to 8080), then calls GetEvent to get the HistoricEvent objects stored in the List and sends them back with the view.

The Event.cshtml (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Views/Home/Event.cshtml) View presents the list of events in a table format.

[Image: Event.cshtml view]

Data API (Node.js)

The backend service code is placed inside the NodeJSApi folder.

[Image: NodeJSApi folder contents]

server.js runs the server, which listens on port 8080.
Since the actual files containing the event data are stored on Azure Blob Storage, we set the URL variable to the blob storage endpoint, which is passed in through an environment variable.

Let’s take a look at the endpoint that returns WW1 (World War I) related events (https://github.com/razi-rais/aks-helm-sample/blob/master/nodejsapi/server.js#L22). First, it connects to the URL, which points to the Azure blob (e.g. https://name.blob.core.windows.net/data/ww1), and then it reads the relevant JSON file (e.g. ww1.json). We check whether the status is 200, meaning the file was pulled from the blob, in which case the content of the response is set to the JSON.

[Image: WW1 endpoint in server.js]
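If you want to sanity-check the data or the endpoint by hand, here are a couple of hedged examples; the /ww1 route is assumed from the description above, and the blob URL is simply the environment variable value plus the file name:

# Fetch the raw data file straight from blob storage
curl https://rzshared.blob.core.windows.net/data/ww1.json

# Or hit the API itself once it is running locally, assuming the route is exposed as /ww1
curl http://localhost:8080/ww1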

Historic Events JSON Files

All the data related to the various historic events is available in JSON format. You can find the link to each historic event's JSON file below.

NOTE: Azure Blob Storage requires file names to be lowercase.

• frenchrevolution (French Revolution): https://github.com/razi-rais/aks-helm-sample/blob/master/data/frenchrevolution.json
• renaissance (Renaissance): https://github.com/razi-rais/aks-helm-sample/blob/master/data/renaissance.json
• ww1 (World War I): https://github.com/razi-rais/aks-helm-sample/blob/master/data/ww1.json
• ww2 (World War II): https://github.com/razi-rais/aks-helm-sample/blob/master/data/ww2.json

Docker Files

Both the frontend and backend services are packaged as Linux Docker container images.

1. Frontend UI: https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Dockerfile

2. Backend API: https://github.com/razi-rais/aks-helm-sample/blob/master/nodejsapi/Dockerfile

Building and Running an Auditing Solution on Blockchain

On 21st February, I will be conducting an event at Microsoft NYC campus on building and running a fully functional blockchain based audit trail application.

The first half is a good fit for both business and technical audiences, as it covers auditing scenarios using blockchain. The latter half will showcase an open source project that provides tracking of Wikipedia change logs using blockchain.

I will do a deep dive into the running solution, which leverages the Ethereum Rinkeby network, and I will showcase the open source project “Wikipedia logs change tracking” that I am currently working on.

Session Summary

  • 6:30 PM | Overview of auditing capabilities of blockchain
  • 7 PM – 9 PM | Project Showcase – Tracking/Auditing Changes from Wikipedia Logs.
  • Q&A + Demos
  • Developers are encouraged to bring their laptops running Mac OS or Windows 10 (or Windows Server 2016). Instructions to setup the project will be provided during the session.

    Understanding R3 Corda and Running it on Azure

    R3 Corda is a blockchain-inspired distributed ledger technology (DLT) from R3 that is specifically designed for financial and regulated transactions, and emphasizes privacy and security between participants. While it is generally available for download as open source code (corda.net), R3 also makes it available on the Azure platform and has plans to integrate Corda with numerous Azure capabilities.

    R3 recently secured a 107 million USD investment from investors including SBI Group, Bank of America Merrill Lynch, HSBC, Intel, and Temasek. R3’s globally diverse group of investors represents an equal geographical split across Europe, Asia-Pacific, and the Americas, counting over 40 participants from over 15 countries.

    • Banco Bradesco
    • Bangkok Bank
    • Bank of America Merrill Lynch
    • Bank of Montreal
    • Bank of New York Mellon
    • Barclays
    • BBVA
    • BNP Paribas
    • B3 (BM&FBOVESPA and Cetip)
    • Canadian Imperial Bank of Commerce
    • Citi
    • Commerzbank
    • Commonwealth Bank of Australia
    • Credit Suisse
    • CTBC Financial Holding
    • Daiwa Securities Group
    • Danske Bank
    • Deutsche Bank
    • HSBC
    • ING
    • Intel Capital
    • Intesa Sanpaolo
    • Itaú Unibanco S.A.
    • Mitsubishi UFJ Financial Group (MUFG)
    • Mizuho
    • Natixis
    • Nomura
    • Nordea Bank
    • Northern Trust
    • OP Cooperative
    • Ping An
    • Royal Bank of Canada
    • SBI Group
    • SEB
    • Societe Generale
    • Sumitomo Mitsui Banking Corporation
    • TD Bank Group
    • Temasek
    • The Bank of Nova Scotia
    • The Royal Bank of Scotland
    • U.S. Bank
    • UBS AG
    • Wells Fargo
    • Westpac

    As demand for R3 Corda is increasing and Microsoft supports running it on Azure through the Azure Marketplace, I decided to have a discussion about R3 Corda during our next NYC Azure User Group meeting in October.

    To talk about R3 Corda and its partnership with Azure, I invited Tom Menner (Director and Solutions Architect at R3) to deliver a talk for my NYC Azure User Group. Since we are based in Manhattan, a significant number of our members work for financial companies, and based on their feedback this session certainly resonated with them.

    Tom predominantly covered the following topics:

    • Understand what Corda is and how it differs from blockchain platforms such as Ethereum and Hyperledger Fabric;
    • Use cases of Corda
    • Corda on Azure and R3’s partnership with Microsoft

    If you would like to view or download the slides used during the session, I have made them available on SlideShare.

    Creating Developer’s Docker Linux Virtual Machine on Azure


    For an upcoming developer event on Docker, I had to create a handful of Ubuntu Linux virtual machines on Azure with Docker and a few additional pieces of software installed.

    I looked into a couple of ways to do that on Azure in a consistent fashion. The first option was to use DevTest Labs and its artifacts. Another option is to use custom extensions. There are other options too, including creating your own base virtual machine image with all the software installed and then uploading it to Azure. I picked the custom extension approach mainly because it is the simplest and I knew the software I needed to install wouldn’t take more than ~5 minutes on average. It also offers a reasonable tradeoff (speed of deployment versus managing your own virtual machine image, etc.).

    Anyways, the actual process to leverage custom extensions is rather straightforward: create the scripts and then call them from your ARM template (which is a JSON file).

    Here is the complete list of software. I chose to use the Ubuntu 16.04 LTS Azure virtual machine image, so the operating system itself did not need to be installed.

    • Docker (Engine & Client)
    • Git
    • Nodejs
    • Dotnetcore
    • Yeoman
    • Bower
    • Azure Command Line Interface (CLI)

    The approach I took was to create a single script file for each one of them, to keep things simple and clean.


    Once done with the scripts, all I need to do is reference/call the install.sh script from the custom extension. Take a look at line 211 in the JSON.

    If you would like to look at the code artifacts, I have made them available in the Git repo. You can also simply try creating a virtual machine with a single click of the “Deploy on Azure” button. You do need an active Azure subscription before you can deploy a virtual machine on Azure.


    Event Announcement “Blockchain 101 – Introduction for Developers”


    Some of you may already be aware that I host the NYC MS Cloud User Group technology meetup every month at the Microsoft Manhattan campus. This month, I will be hosting and presenting alongside my colleague Cale Teeter on blockchain. I did a similar session earlier this year in January, the turnout was great, and based on feedback we are doing another session in July.

    Here is the brief agenda:

    • Learn the basics of blockchain. What exactly is a block? How are blocks created? What are transactions?
    • Understand what a transaction is and the role of mining.
    • Learn what smart contracts are and how to write them in Solidity.
    • Demos (mostly based on Ethereum, but we will talk about other chains too, as it’s important to understand the overall landscape)

    The session, “Blockchain 101 – Introduction for Developers”, takes place on Monday, Jul 31, 2017, at 6:30 PM; location details are available to Meetup members.

    DevOps with Containers

    Recently I did a video series for Microsoft Channel9 on DevOps with Containers (thanks to Lex Thomas and Chris Caldwell for recording these). The idea was simple: show and tell how container technology can help improve the DevOps experience.

    It’s a ~2-hour long recording (divided into three parts for easy viewing) that covers topics including containerization of applications; continuous integration and deployment of containerized applications using Visual Studio Team Services, Azure Container Service, Docker Swarm, and DC/OS; and monitoring containers using Operations Management Suite and third-party tools.

    Here is the breakdown of each session. If you’re interested in looking at the sample application that I deployed in the last session (an ASP.NET Core web app and API), it’s available on my Git repo.

    Part 1 – Getting Started with Containers

    In the first part, the focus is on introducing the basic concepts of containers and the process of application containerization. I targeted Windows Containers in this part, though later parts show how to leverage multi-container applications based on ASP.NET Core using Linux containers. If you want to try Windows Containers, I have provided this link that will allow you to automatically provision a Windows Server 2016 virtual machine with container support (including docker-compose). Also, the Azure ARM template that provisions the virtual machine is available here.

    • [2:01] What is a Container and how can it benefit organizations?
    • [5:20] DEMO: Windows Containers 101 - Basics and Overview
    • [9:33] DEMO: How to create a Container on Nano Server
    • [15:39] DEMO: Windows Server Core and Containers
    • [19:36] DEMO: How to containerize a legacy ASP.NET 4.5 application
    • [43:48] DEMO: Running Microsoft SQL Server Express inside a Container

    Part 2 – Building CI/CD pipeline with VSTS and Azure Container Service

    The second part focuses on building a Continuous Integration (CI) and Continuous Deployment (CD) pipeline for multi-container applications using Visual Studio Team Services (VSTS) with deployment target of Azure Container Service (ACS) hosting DC/OS and Docker Swarm.

    I developed a sample application that represents a canonical web app and API (in this case I used ASP.NET Core 1.1, but it could just as well be Node.js, Python, Java, etc.). The demos then show a workflow that starts by submitting code along with a Dockerfile and a docker-compose file, which the VSTS build uses to create a new container image every time a build runs (in {container name}:{build number} format). Container images are hosted in Azure Container Registry, which is a private Docker trusted registry (DTR). After the container image is ready, continuous deployment happens and VSTS kicks off the release, which targets both DC/OS and Docker Swarm hosted on Azure Container Service (ACS).

    • [2:54] The Big Picture – Making DevOps successful
    • [6:34] DEMO: Building a Continuous Integration and Continuous Deployment system with Azure Container Service and Visual Studio Team System
      • Multi-Container Application | ASP.NET Core
      • Container Images Storage | Azure Private Docker Registry
      • Build & Release Deployment | Visual Studio Team System

    Part 3 (Final) – Monitoring and Analytics

    This is the final part, which focuses on monitoring and analytics for container applications running on Azure Container Service. Microsoft Operations Management Suite (OMS) is the primary service used in the demos, but I also mention third-party services that are supported on Azure Container Service and provide monitoring, analytics, and debugging functionality.

    • [3:20] Does Orchestration = Containers?
    • [5:40] DEMO: Monitoring and Analytics

    Final Thoughts

    Containers are a massively useful technology for both greenfield and brownfield application development. Organizations today have various levels of maturity when it comes to DevOps, and containers give them a great option to enable DevOps effectively. Of course there are considerations, like the learning curve and the lack of proven practices and reference architectures compared to traditional technologies. However, this will become less of a concern over time as the knowledge gap is filled and reference architectures emerge.

    Finally, you should also broaden your design choices to include a combination of containers and serverless computing (e.g. Azure Functions, which actually run inside containers themselves!). This is a particularly interesting option when your service is mostly stateless, and it is something I would like to cover in a future blog post.

    First Look Into Blockchain

    Since last year, I have been spending time with customers understanding how blockchain can help them improve or replace existing processes. It’s a relatively new technology but it is evolving very fast. Anyways, I recorded an hour-long video session, First Look Into Blockchain, for Channel9. It predominantly focuses on blockchain from a developer’s perspective.

    • [0:57] What is Blockchain?
    • [2:14] How is this different than a standard distributed database?
    • [5:16] DEMO: Introduction and Overview of Blockchain in a Dev/Test lab on Azure
    • [30:40] DEMO: Blockchain and .NET

    Using Client Certificate Authentication for Web API Hosted in Azure

    During a recent customer engagement there was a discussion around client certificate [a.k.a. TLS mutual] authentication and how to use it with an ASP.NET Web API that is hosted on Azure as an Azure API app. There is an article that covers this topic for web apps hosted in Azure, but it cannot be used as-is for Web API, as there are some differences in how to get the certificate inside a Web API versus a web app. I also noticed that it does not discuss how to actually make a call to the Web API from a client app (e.g. a console or web application) and demonstrate how to pass the client certificate as part of the request. This post is going to cover exactly these two topics:

    1. Demonstrate how to capture a client certificate inside a Web API hosted on Azure as an Azure API app.

    2. Demonstrate a client application making a call to the secured Web API by passing a client certificate in the header. We will use both a browser and a sample console app, but the principle remains the same and can be applied to other types of applications, including web applications.

    On a side note, if you are wondering why on earth you would need to use client certificate based authentication today, you’re not alone. In fact, this Stack Exchange thread presents good information on the topic.

    What are Client Certificates?

    Let's get some very basic information about client certificates out of the way. I will try to keep it as short and concise as possible. If you are already aware of what they are and how they work, then you may want to skip this section.

    Client certificates are essentially X.509 certificates. They are used by a client, which can be a user or an application, to establish its identity. Examples of client applications are a Windows service, a console application, or even a web application running on a web server. You may ask how a client certificate is different from a server certificate and why we can’t just use a server certificate for both purposes. Actually, you can: it is technically possible to have a certificate that serves as both a client and a server certificate, but in practice that usage is less common. This can be explained by looking at the intended use case for each type of certificate. A server certificate is used by the server to tell the client that the identity of the accessed system [e.g. a website] is actually what it says it is and should be trusted. That’s why inside the server certificate you find attributes such as the domain name that the website is hosted on [www.contoso.com or *.contoso.com]. When the browser does the TLS handshake behind the scenes, it verifies everything it has to in order to make sure the certificate can be trusted. That’s why browsers already store many trusted certificates from sources like GoDaddy, VeriSign, etc. Client certificates do a similar thing for clients, by telling a server “I am giving you this certificate and you should trust me with it”. The server has to carry out activities similar to what the browser does with a server certificate, to ensure that the certificate presented by the client is not spoofed or tampered with and is actually a valid certificate.

    To sum up, the following are the key points to remember about client certificates:

    1. Client certificates are different from server certificates in their intended usage.

    2. Client certificates need both the public and the private key to work. This is an important consideration, particularly when you are troubleshooting a scenario where the server is consistently declining a certificate presented to it.

    3. How do I determine whether the certificate I have is a client or a server certificate? It’s rather simple. Follow these steps:

    • Locate the certificate and double click to open it.
    • On the General tab, under the certificate information section, a client certificate will say “Proves your identity to a remote computer”, whereas a server certificate will say “Ensures the identity of a remote computer”. Pay attention to the words proves versus ensures!
    • On the Details tab, scroll until you see the field “Enhanced Key Usage”. For a client certificate the value of this field is “Client Authentication (1.3.6.1.5.5.7.3.2)”, and for a server certificate it is “Server Authentication (1.3.6.1.5.5.7.3.1)”

    Web Api & Client App Code Download

    Go ahead and download the complete source code, including the Visual Studio projects, from the GitHub repo: AzureWebApiClientCertAuthSample

    1. WebApiWithClientCertAuth: A Web API project that reads the client certificate from the incoming request. You will also publish this Web API to Azure as an API app.
    2. WebApiClient: A .NET console application that calls the Web API in #1, passing a client certificate as part of the API request.
    3. Certificate: Not a project but a folder containing a self-signed client certificate as a .pfx file.
    4. AzureWebApiClientCertAuthSample: A Visual Studio solution file that aggregates all of the above.

    Install Self Signed Certificate

    I chose to install the self-signed certificate in the certificate store on the local machine. You can do the same by opening the MMC console [type mmc in the Run window and press Enter].

    • From the File menu, select Add/Remove Snap-in
    • Select Certificates from the snap-in list
    • Choose Computer account and press Next
    • Press Finish and then press OK
    • Expand the Certificates node and then expand the Personal folder
    • Right click the Certificates node, select the All Tasks option, and then select Import
    • Press Next, locate the certificate file, and press Next
    • Provide the password [use the password.txt file available as part of the download, inside the Certificate folder]
    • Keep pressing Next and finally press Finish
    • You should now have a client certificate ready to be used

    If you want to create your own self-signed certificate, you can use the New-SelfSignedCertificate PowerShell cmdlet. Please note that although this cmdlet is available in different versions of the Windows Server platform, the -TextExtension parameter is only available in Windows Server 2016 [currently in Technical Preview 4].

    New-SelfSignedCertificate -Type Custom -Subject "CN=DO_NOT_USE_IN_PRODUCTION" -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=joe@contoso.com") -KeyUsage DigitalSignature -KeyAlgorithm RSA -KeyLength 2048 -CertStoreLocation "Cert:\LocalMachine\My" 

    Web API & Client Certificate Authentication

    I basically created a plain vanilla ASP.NET 4.5 Web API project using Visual Studio 2015. If you want to create your own, that should work too. Just pay attention to the code that I’ve added, which is minimal but important. Other than that, there is nothing special about the Visual Studio project.

    Let's quickly run the application locally by pressing F5. You should see the browser navigate to http://localhost:3959 and present the usual default ASP.NET page. Let's make sure everything else is working as expected by browsing to the following URL:

    http://localhost:3959/api/values

    You should get a JSON response similar to the following.

    ["Connection : Keep-Alive","Accept : text/html","Accept-Encoding : gzip","Accept-Language : en-US","Host : localhost:3959","User-Agent : Mozilla/5.0"] 

    If you’re using Internet Explorer, the default behavior may be to just download the file, but Chrome and Firefox typically show the JSON in the browser itself. So, what exactly is going on? Let’s look at the code to figure it out. Open the ValuesController.cs file under the Controllers folder inside the Web API project.

    public IEnumerable<string> Get()
    {
        ProcessClientCertificate pCert = new ProcessClientCertificate();
        System.Net.Http.Headers.HttpRequestHeaders headers = this.Request.Headers;
        List<string> lst = new List<string>();

        foreach (var header in headers)
        {
            if (headers.Contains(header.Key))
            {
                string token = headers.GetValues(header.Key).First();
                if (!string.IsNullOrEmpty(token))
                {
                    lst.Add(header.Key + " : " + token);
                }
            }
            else
            {
                lst.Add(header.Key + " : No value ");
            }
        }

        // If the platform forwarded a client certificate, return its thumbprint and issuer instead
        if (headers.Contains("X-ARR-ClientCert"))
        {
            string token = headers.GetValues("X-ARR-ClientCert").First();
            X509Certificate2 cert = pCert.GetClientCertificateFromHeader(token);
            return new string[] { cert.Thumbprint, cert.Issuer };
        }

        return lst.ToArray<string>();
    }
    

    This is the method that was executed and the reason behind the returned JSON. It starts by creating an object of a custom class to handle the incoming certificate [more on that in a moment], and then it gathers all the headers that came as part of the incoming request. These are HTTP headers and are represented by the type

    System.Net.Http.Headers.HttpRequestHeaders 

    Why do we need the headers? Because the certificate will be inside a header with the key

    X-ARR-ClientCert

    It is also worth mentioning that the certificate is base64 encoded.

    I just put them inside a list object, with the key and value of every header separated by a colon character, one per list item. If you look at the JSON you received, it was exactly that. But where is the client certificate part? Notice that near the end of the method there is an if condition checking for the header that should be carrying the client certificate. If it does find it, then it calls the

    GetClientCertificateFromHeader

    method, which takes the base64 encoded client certificate value coming out of the header as an input. The actual code is located inside the following file:

    //ProcessClientCertificate.cs
    public X509Certificate2 GetClientCertificateFromHeader(string certHeader)
    {
       X509Certificate2 certificate = null;
       if (!String.IsNullOrEmpty(certHeader))
       {
          byte[] clientCertBytes = Convert.FromBase64String(certHeader);
          certificate = new X509Certificate2(clientCertBytes);
       }
       return certificate;
    }
    
    

    It basically reads the base64 string, converts it to a byte array, and creates a certificate of type X509Certificate2. This certificate has all the attributes that you expect in an X.509 cert, including subject, issuer, thumbprint, etc.

    Going back to the actual API method, notice that when a certificate is found in the header it simply returns a string array with the certificate thumbprint and its issuer. But you never got either of these in the JSON. Why is that? Let's cover that next.

    Publishing Web API to Azure & Enabling Client Certificate Authentication

    Up to this point everything was running locally, because Visual Studio hosts the Web API on IIS Express. In order for client certificate authentication to work, the following needs to happen:

    1. The client and server must establish a TLS channel
    2. The client needs to send the client certificate
    3. The server needs to read the client certificate

    For #1, host the API on a website that supports HTTPS; with an Azure API app that happens by default. #2 is simple enough too: any modern browser like Internet Explorer, Chrome, or Firefox has built-in support for presenting you with an option to choose a client certificate present on the machine and then sending it to the server over the TLS channel. Finally, #3 we have already covered. Remember the part where we read the certificate, base64 encoded, from the header.

    Just to add a few points: depending on your hosting choice, you may not always get the certificate inside the X-ARR-ClientCert header. For an Azure API app hosting a Web API it is inside that header because that is where the underlying platform, which uses Application Request Routing, places it. Typically for Web API you would use the method

    Request.GetRequestContext().ClientCertificate

    to access the client certificate.

    Let’s go ahead and publish the Web API to Azure as an API app. I won’t cover the actual publishing steps, as they are very well documented. Do take note of the location where your API app is deployed and its resource group. For my deployment, the location was “East US 2” and the resource group was “WebApiGroup”. We will need this information momentarily.

    [NOTE] It is absolutely critical that you choose a paid App Service plan for the API app. By default it is set to the Free plan, and that does not allow you to enable client certificate authentication! You may want to consult this article for more details. Also, you can change the App Service plan after deployment. At this point you should have the App Service plan updated to something other than Free.

    We should now do a quick test to see how the Web API is doing by visiting the URL:

    http://[YOUR WEB SITE NAME].azurewebsites.net/api/values
    

    The response will be JSON, and it should be similar to the JSON that you received earlier when you tested the API locally. It should have headers but nothing related to the certificate. I promise, one last setting and we are all set for client certificate authentication.

    We now enable client certificate authentication on the published Web API. At the time of writing, this is a bit more tedious than it should be, because there is no provision for it in the Azure portal or the Azure PowerShell cmdlets. You have at least two options to make it happen. You can choose whichever makes more sense to you, but option 1 is simple and does not require any installation, so it is a zero-touch solution.

    Option 1: The way I did it is by using Azure Resource Explorer, which provides a simple user interface to read and update the properties of an Azure resource. Here are the steps you need to follow:

    • After logging in to Azure Resource Explorer, change the default option in the top pane from “Read” to “Read/Write”
    • In the left-hand tree view, expand the subscription node and then expand the resourceGroups node
    • Expand the resource group which contains the Web API [e.g. WebApiGroup] for which you would like to enable client certificate authentication
    • Expand “Microsoft.Web” and then expand the “sites” node
    • Select the “Edit” option under the Data tab, located next to the “GET” option
    • Notice that the right-hand pane shows a JSON listing. Locate the attribute “clientCertEnabled” and double click its value “false” to select it
    • Change the value from “false” to “true”
    • Save the changes by pressing the “PUT” button.

    Finally, the JSON should look similar to the screenshot below. Pay attention to the line showing the clientCertEnabled attribute with the value true.

    [Image: clientCertEnabled property in Azure Resource Explorer]

    Option 2: If for some reason you don’t want to use Azure Resource Explorer, you can use a tool like ARMClient. You do have to install it first; once that is done, create a JSON file like the one shown below and save it to disk. I saved it at c:\files\input.json.

    { "location": "East US 2","properties":{"clientCertEnabled": true } }

    Remember we noted the location of our web app earlier; this is where we need it.

    Finally, prepare the command for the ARMClient tool:

    armclient PUT subscriptions/{Subscription Id}/resourcegroups/{Resource Group Name}/providers/Microsoft.Web/sites/{Website Name}?api-version=2015-04-01 @c:\files\input.json -verbose
    

    You will need to provide the values inside the braces. You can gather them by going to the Azure portal, selecting the web app, expanding the blade, and selecting the resource group property link. Now look at the URL and notice that it basically has the values for all the fields you need for the command above. Copy them from the URL and paste them into their respective places in the command.

    Finally, run the command and make sure that it completes successfully.

    Testing Client Certificate Authentication Using Browser

    You must be impatiently waiting to test this all out. I can’t blame you, so let’s jump right into it. I will demonstrate the call using Internet Explorer [v11], but I have also tested the process successfully using Chrome [v48.0.2564.116] and Firefox [44.0.2], all running on Windows Server 2012 R2.

    1. Launch Internet Explorer

    • Go to Settings and then select Internet Options
    • From the Content tab, select Certificates
    • Press Import and then select the client certificate [you downloaded it earlier]. If you don’t see the certificate, make sure you have selected the “All Files” option from the dropdown at the bottom right.
    • Press Next and provide the password
    • Keep pressing Next till you see Finish, then press it to complete the import process
    • You should see an “import was successful” message
    • Close all the dialogs

    2. Browse to the web app using Internet Explorer. Note that you are using https and not http:

    https://[YOUR WEB SITE NAME].azurewebsites.net/api/values

    You should now see a Windows Security dialog asking you to select the client certificate so the browser can send it to the web app. Select the one you imported earlier, as highlighted in the figure below.

    [Image: selecting the client certificate in the Windows Security dialog]

    You should now receive the JSON, but this time it has the thumbprint and the name of the issuer in it!

     ["836D19627899F09F5EE60D71B2B9823201321E08","CN=DO_NOT_USE_IN_PRODUCTION"]

    Testing Client Certificate Authentication Using Console Application

    In the previous section we did a round of testing using the browser. What happens when, instead of a browser, you want to call the Web API from an application like a Windows service, console application, task scheduler, or web application? The process is generally the same for all of these. I’ve provided a sample console application as part of the download. Open the WebApiClient project in Visual Studio. We will now look at the code that calls the Web API programmatically and passes the client certificate as part of the request.

    Let's take a look at the CallHttpsApi method, which does most of the work. It is the only method called by the Main() method located inside the Program.cs file.

     static void CallHttpsApi()
            {
                Console.WriteLine("Enter full path to certificate file (.pfx)");
                string certFile = Console.ReadLine();
                Console.WriteLine("Enter password for the certificate");
                string certPassword = Console.ReadLine();
                Console.WriteLine();
    
                string baseWebUrl = "webapiwithclientcertauth.azurewebsites.net";
                string url = string.Format("https://{0}/api/values",baseWebUrl);
                HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
                req.ClientCertificates.Add(GetCertFromFile(certFile,certPassword));
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                using (var readStream = new StreamReader(resp.GetResponseStream()))
                {
                    Console.WriteLine(readStream.ReadToEnd());
                }
    
                Console.ReadLine();
            }

    It starts by asking for the path to the client certificate file in .pfx format, followed by the password. Both the certificate and the password are provided inside the Certificate folder. Next, it creates the URL for the API to call; you need to replace baseWebUrl with the address of your Azure API app. We then create an HttpWebRequest object and add the client certificate to its ClientCertificates collection.

    GetCertFromFile is a custom utility method that reads the certificate from the file, as shown below:

            static X509Certificate2 GetCertFromFile(string certPath, string certPassword)
            {
    
                X509Certificate2 cert = new X509Certificate2();
                cert.Import(certPath, certPassword, X509KeyStorageFlags.PersistKeySet);
                
                return cert;
            }
    

    On a side note, you can also get the certificate from the certificate store by using attributes like the certificate thumbprint. I added a custom method, GetCertByThumbprint, for that, so take a look at it. I won’t discuss it here though.

    Finally, we create an HttpWebResponse object and then read the response through a StreamReader.

    Run the console application and provide it with the full path to the client certificate and the password. You should see a successful call to the Web API, and the JSON response will include the certificate thumbprint and common name [CN], as in the screenshot below.
    [Image: console client calling the Web API]

    Notice that the response is identical to what you saw in the previous section when we tested the API call with the browser. That makes sense: our API is hosted inside an Azure API app, and it does not care how the call is made. It is agnostic of the client making the call; all it requires is a client certificate passed as part of the request over the TLS channel. In fact, if you run the same program again and just change the request URL from https to http, you will see only the headers in the JSON response. That is because the server and client won’t perform a TLS handshake, so there is no reason for the server to expect a client certificate either. To sum up, always use HTTPS when working with client certificates.

    Concluding Remarks

    It’s already a long post, so I’d like to sum up a few key points. You should now have a good understanding of how to enable client certificate authentication for a Web API hosted on an Azure API app, and of how various client apps, including a browser and a console app, can call the Web API by passing a client certificate as part of the request. This post focused on authentication, but we haven’t discussed how to validate the certificate once it arrives at the server application, such as a Web API. As you might rightly imagine, you would never use a self-signed certificate in production. That is very important, but it requires a more detailed discussion. In short, a client certificate is typically issued by a certificate authority [CA] that the enterprise trusts. Usually it is an enterprise’s internal CA that issues the client, either an individual or an application, a certificate, which the client in turn presents to the server-side application, such as a Web API, to establish its identity. The server-side application goes back to the CA and verifies the certificate first. Once the certificate is verified, the next step is to use its attributes to establish the client identity and then use that identity to perform authorization. Hopefully these topics can be covered in future posts.

    Developer’s Guide to Automated Install of ADFS On Windows Server 2016 (TP4)


    Recently I ran into situations where I had to build a developer environment that needs Active Directory Federation Services [ADFS] running on Windows Server 2016 [currently in Technical Preview 4, hence w2k16-tp4]. I am intentionally avoiding the term ADFS ‘v4’, which is really tempting, but it is about time to move away from these version labels; from now on you can simply refer to it as ADFS running on w2k16. So, what I really needed was something that could be up and running in the fastest way possible. It is a pure developer setup focused on saving time on installation and configuration, so no server hardening, least-privilege accounts, or all those things that are absolutely mandatory for non-developer environments like production!

    OK, with that out of the way, a couple of things. Firstly, I decided to focus on the following two pieces in this post, which will get an ADFS instance up and running on w2k16:

    • Active Directory Domain Services [ADDS]
    • Active Directory Federation Services [ADFS]

    I also installed Visual Studio 2015 and SQL Server 2014 for claims injection, but I am not covering that in this post. SQL Server does not like to be installed on a domain controller, so I had to tame that beast to make it work; my advice would be not to do that unless you really have to.

    [NOTE: I have tried these steps on Windows Server 2016 Technical Preview 4. There is no guarantee that they will work as-is, or at all, on any future previews or the RTM release. Also, these instructions and scripts are provided without any warranty and are not for production usage.]

    Choosing the Platform

    All you need to get started is w2k16-tp4 installed and running. I decided to use an Azure VM to install and host it. You can do the same by going here and following the instructions. By no means do you have to use an Azure VM, though, so feel free to choose your preferred method to install it, either on-premises or in the cloud.

    You should now be looking at the login screen before you move to the next step. Also, everything we do from this point onwards will be done using an account with admin privileges.

    Installing the Active Directory Domain Services

    ADFS needs a domain controller, so we will start by installing Active Directory Domain Services [ADDS] using the PowerShell script below:

    
    $domainName = "contoso.com"
    $password = "*********"
    $securePassword = ConvertTo-SecureString $password -AsPlainText -Force
    
    Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName $domainName -SafeModeAdministratorPassword $securePassword -Force
    
    

    The above script is straightforward, but in case this is your first time installing ADDS, let's take a look at what's going on. You start by setting a domain name; the choice of name is really up to you. Next, the password is provided. I would advise choosing a passphrase that you can remember, which should be better than “p@ssw0rd”. A secure string is constructed, as needed by the Install-ADDSForest cmdlet that does the work of installing ADDS on the server. The -Force switch is there to make the cmdlet ignore warnings.

    The above script takes a few minutes to completely install the domain controller, and the operating system will be restarted afterwards. Next, let's install ADFS.

    Installing the Active Directory Federation Services

    Before we jump into the installation of ADFS, we need to procure a certificate, as ADFS needs it as part of the installation and also to function. Creating the certificate is something that needs to be taken care of up front, as shown by the script below.

    $fqdn = (Get-WmiObject win32_computersystem).DNSHostName + "." + (Get-WmiObject win32_computersystem).Domain 
    $password = ConvertTo-SecureString -String "********" -Force -AsPlainText 
    $filename = "C:\$fqdn.pfx" 
    
    $selfSignedCert = New-SelfSignedCertificate -CertStoreLocation cert:\localmachine\my -DnsName $fqdn 
    $certThumbprint = $selfSignedCert.Thumbprint
    Export-PfxCertificate -Cert cert:\localMachine\my\$certThumbprint -Password $password -FilePath $filename
    
    # Optional - adding the cert to the trusted root store stops the browser complaining that the self-signed cert is not from a trusted certificate authority. Just for the record, you should never do this outside a dev environment.
    
    $pfx = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2  
    $pfx.Import($filename, $password, "Exportable,PersistKeySet")  
    $store = New-Object System.Security.Cryptography.X509Certificates.X509Store([System.Security.Cryptography.X509Certificates.StoreName]::Root, "localmachine") 
    $store.Open("MaxAllowed")  
    $store.Add($pfx)  
    $store.Close() 
      
    

    The fqdn variable is set by using two WMI cmdlets to get the computer name and the domain name and concatenating them with “.” to give us the fully qualified domain name, e.g. w2k16-machine.contoso.com, which is then used to create a new self-signed certificate with the New-SelfSignedCertificate cmdlet. From a technical standpoint it is not an absolute must to use the FQDN, and you can provide any valid string for the certificate name, but this does make the script a bit more reusable in my view.

    The password is needed for the next cmdlet, Export-PfxCertificate, which exports the certificate to the filesystem in .pfx format. You should provide a passphrase that you will remember for future use. The last few lines are optional but recommended [dev environment only] to avoid browser warnings related to self-signed certificates: basically we take the self-signed certificate and add it to the Trusted Root Certification Authorities store on the local machine.

    We are now ready to set up ADFS. The Install-WindowsFeature cmdlet is used with ADFS-Federation as the name of the feature to be installed. This begins the ADFS install, which typically takes several minutes to complete. Next, import the ADFS module to get the full set of cmdlets needed for further configuration of ADFS.

    Install-AdfsFarm is the cmdlet that actually configures ADFS, and it requires the following parameters:

    • CertificateThumbprint: Provide the thumbprint of the self-signed certificate created in the previous step.
    • FederationServiceName: This should match the CN [common name] in the certificate. The self-signed cert created earlier has the FQDN as its common name.
    • ServiceAccountCredential: This is the domain account that runs the ADFS service. You will use the same admin account you have been using so far. Again, an admin account should never be used beyond a developer-environment setup of ADFS.

    Install-WindowsFeature -IncludeManagementTools -Name ADFS-Federation 
    
    Import-Module ADFS 
     
    $user  = "$env:USERDOMAIN\$env:USERNAME" 
    $password = ConvertTo-SecureString -String "********" -AsPlainText -Force 
    $credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $password 
     
    Install-AdfsFarm -CertificateThumbprint $certThumbprint -FederationServiceName $fqdn  -ServiceAccountCredential $credential  
    
    

    One last step: you must check whether the SPN [service principal name] is set up properly for the account running ADFS. This step can be automated (see the sketch after the list), but for now here are instructions to do it manually. You should be able to do it in under a minute.

    • Open a cmd prompt and type adsiedit.msc
    • In the ADSI Edit console, right click, choose Connect to, and then press OK
    • Expand the nodes until you see CN=Users
    • Select the user account you chose to install ADFS with
    • Right click the user account and select Properties
    • Scroll in the Attribute Editor until you see servicePrincipalName
    • Click Edit
    • You should see http/{fqdn} listed there; if it is not present, add it using the "value to add" text box. Remember, fqdn is what you have been using so far, and there is only a single forward slash "/", not a double "//"
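    If you prefer to script this step as well, the setspn utility can register the SPN in one line; this is only a sketch, so substitute your own FQDN and service account:

    REM Register the HTTP SPN for the account running ADFS (the admin account used above)
    setspn -s http/w2k16-machine.contoso.com contoso\administrator

    REM List the SPNs registered for the account to verify
    setspn -l contoso\administrator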

    If everything goes well, you should have a working ADFS environment ready!

    Testing

    From a cmd prompt, launch Internet Explorer [not Edge, as it doesn't like to be launched by an admin user process]:

    cmd /K "%ProgramFiles%\Internet Explorer\iexplore.exe"
    

    Open the federation metadata by using the URL:

    https://{fqdn}/FederationMetadata/2007-06/FederationMetadata.xml
    

    You need to replace {fqdn} with that of your machine; if you are following along, the fqdn variable in the script above can also give you that value in case you want to get it via scripting.

    You should now see the browser window displaying the XML [ignore the formatting], similar to the one shown in the screenshot.

    [Image: ADFS federation metadata XML]

    Concluding Remarks

    You should now be running an ADFS farm on a single machine. From here you can go further by installing Visual Studio, SQL Server, etc. One caveat with SQL Server, though, is that it does not like to be installed on a domain controller, for many very valid and legitimate reasons. I did try it, so as to have everything on a single virtual machine [an Azure D2-type VM: 14 GB RAM + 2 cores + w2k16-tp4 + ADFS + SQL Server 2014 + Visual Studio 2015], and it does work out fine, but I had to make some minor tweaks for SQL to work. I do think, though, that SQL on a separate machine may be a better idea in general, just to play nicely with the product, even in a dev environment where you want complete freedom.

    Also, on a side note, if you are using an Azure VM, then virtual machine extensions provide you with an option to run the above scripts at VM creation time [or at any other stage of the VM lifecycle], making it super easy to have a fully working VM with everything, including ADFS, ready as soon as you create a new VM. That is perhaps a good topic for a future post.