Part I – Kubernetes DevOps: Introduction to the Historic Events Microservice

This is the first post in a multi-part blog series on Kubernetes DevOps using Azure. I am co-authoring this series with my colleague at Microsoft, Daniel Selman. We recently worked on a Kubernetes project together and thought we'd share our learnings.

Below is the high-level structure of the blog posts we plan to publish:

Part I: Introduction to the Historic Events Microservice
Part II: Getting started with Helm
Part III: VSTS Build (Helm package + containerization of application)
Part IV: VSTS Release (using Helm)
Part V: Lessons Learned – When things go wrong!

We assume you have basic knowledge of Kubernetes (K8s) and Docker containers, as we don't cover the basics of either in this series.

Software/Services

Below is the list of software you will need to install on your machine.

• Kubectl
• Helm
• Docker
• Minikube (optional, only needed for local testing)
• Git
• Azure CLI

If you'd like to use a script to install this software on a Linux VM (tested on Ubuntu 16.04), you can download one here: https://github.com/razi-rais/microservices/blob/master/reference-material/install-k8s-lab-software.sh
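If you prefer to install things by hand, the commands below show the general approach on Ubuntu. This is a sketch of the era's documented install methods, not necessarily the script's exact contents; check the linked script for the tested versions.

```bash
# kubectl: download the latest stable release binary
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# Helm (v2-era installer script)
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash

# Docker (convenience script)
curl -fsSL https://get.docker.com | sh

# Azure CLI (Debian/Ubuntu one-liner)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
```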

On the services side, we will be using Azure Kubernetes Service (AKS) and Visual Studio Team Services (VSTS). If you don't have an Azure subscription, you can get a free Azure trial here: https://azure.microsoft.com/en-us/offers/ms-azr-0044p

For demonstration purposes, we have created a simple Historic Events microservice. We thought it wouldn't hurt to throw in some history while working with modern technologies!

Overview

From a technical perspective, we have a microservice, written in ASP.NET Core 2.0, that serves the UI. It pulls data by calling various RESTful endpoints exposed by a Node.js API, which is served by another microservice. The actual content served by the API (the details about the historic events) is stored in various JSON files, which are persisted as blobs on Azure Storage.

In a nutshell, from an end user's standpoint, the web app home page looks like this:

image001

When a user wants to learn more about a particular historic event, they can either select it from the top menu or simply click on its description on the home page.

For example, the French Revolution event page is shown below. All event detail pages follow a similar table-based layout to list key events.

image003

Code Walkthrough

The code and all relevant artifacts are available on GitHub: https://github.com/razi-rais/aks-helm-sample

image005

Frontend UI (ASP.NET Core)

This is a plain vanilla ASP.NET Core 2.0 web application.

HistoricEvent (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/Event.cs#L8) defines a basic entity that represents an event object. Its attributes are the date and the description of a historic event.

image007
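Conceptually the entity is tiny. Here is a minimal sketch (property names are assumptions based on the description above; see Event.cs for the actual definition):

```csharp
// Sketch of the HistoricEvent entity: just a date and a description
public class HistoricEvent
{
    public string Date { get; set; }        // when the event occurred
    public string Description { get; set; } // what happened
}
```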

Most of the actual work happens inside the HomeController, which provides methods to connect to the backend API service and fetch the data.

The GetEvent (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/HomeController.cs#L38) method takes an endpoint URL as a parameter. It connects to the endpoint, reads the content as a string asynchronously, and converts the JSON into objects stored in a List of type HistoricEvent. Finally, it returns the List containing all the events.

image009
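If the screenshot is hard to read, the method boils down to something like this. This is an approximation, not the repo's exact code; it assumes the Newtonsoft.Json package that ASP.NET Core 2.0 ships with:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class HomeController : Microsoft.AspNetCore.Mvc.Controller
{
    // Sketch of GetEvent: fetch JSON from a backend endpoint and deserialize it
    public async Task<List<HistoricEvent>> GetEvent(string url)
    {
        using (var client = new HttpClient())
        {
            // Read the response body as a string, asynchronously
            string content = await client.GetStringAsync(url);

            // Convert the JSON payload into a List of HistoricEvent objects
            return JsonConvert.DeserializeObject<List<HistoricEvent>>(content);
        }
    }
}
```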

If you are wondering who calls GetEvent, it is the Event method (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/HomeController.cs#L54).

image011

This is basically an action tied to the View. The id parameter acts as a key referring to the event we want to fetch from the backend service (e.g. ww1, ww2). The method itself is trivial, and we have left most optimizations out: at the moment it does the bare minimum, printing to the console which endpoint it is going to connect to, with the port currently set to 8080. Finally, it calls GetEvent and sends the resulting List of HistoricEvent objects back with the View. A sketch follows below.
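Continuing the sketch style from above (the backend hostname is a placeholder, and the port matches the 8080 mentioned above; see HomeController.cs#L54 for the real code):

```csharp
// Inside HomeController: a sketch of the Event action (an approximation)
public async Task<IActionResult> Event(string id)
{
    // id is the event key, e.g. "ww1", "ww2", "frenchrevolution"
    string url = $"http://<api-host>:8080/{id}"; // <api-host> is a placeholder

    // Bare-minimum diagnostics: print which endpoint we are about to call
    Console.WriteLine($"Connecting to endpoint: {url}");

    // Fetch the events and hand them to the View
    List<HistoricEvent> events = await GetEvent(url);
    return View(events);
}
```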

The Event.cshtml (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Views/Home/Event.cshtml) View presents the list of events in a table format.

image013
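The markup boils down to a model declaration and a loop over the events. Roughly (a simplified sketch, not the repo's exact markup):

```cshtml
@* Sketch of Event.cshtml: render the events as a table *@
@model List<HistoricEvent>

<table class="table">
    <tr>
        <th>Date</th>
        <th>Description</th>
    </tr>
    @foreach (var evt in Model)
    {
        <tr>
            <td>@evt.Date</td>
            <td>@evt.Description</td>
        </tr>
    }
</table>
```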

Data API (Node.js)

The backend service code is placed inside the NodeJSApi folder.
image015

server.js runs the server, which listens on port 8080.
Since the actual files containing the event data are stored on Azure Blob Storage, we set the URL variable to the blob storage endpoint, which is passed in through an environment variable.

Let’s take a look at the endpoint that returns ww1 (World War I) related events (https://github.com/razi-rais/aks-helm-sample/blob/master/nodejsapi/server.js#L22). First, it connects to the URL, which points to the Azure blob file (e.g. https://name.blob.core.windows.net/data/ww1), and then it reads the relevant JSON file (e.g. ww1.json). We check whether the status is 200, meaning the file was successfully pulled from blob storage, in which case the response content is set to the JSON.

image017
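The shape of such an endpoint is roughly the following. This sketch assumes Express is used; check server.js for the actual dependencies and code:

```javascript
// Sketch of the data API (assumes Express; see server.js for the real code)
const express = require('express');
const https = require('https');

const app = express();

// Blob storage endpoint, passed in through an environment variable,
// e.g. https://<account>.blob.core.windows.net/data
const URL = process.env.URL;

app.get('/ww1', (req, res) => {
  // Fetch the relevant JSON file from Azure Blob Storage
  https.get(`${URL}/ww1.json`, (blobRes) => {
    // A 200 status means the file was pulled from the blob successfully
    if (blobRes.statusCode !== 200) {
      res.status(502).send('Could not fetch event data');
      return;
    }
    let body = '';
    blobRes.on('data', (chunk) => (body += chunk));
    blobRes.on('end', () => res.type('application/json').send(body));
  });
});

app.listen(8080, () => console.log('Data API listening on port 8080'));
```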

Historic Events JSON Files

All the data for the various historic events is available in JSON format. You can find the link to each event's JSON file below.

NOTE: Azure Blob Storage requires container names to be lowercase, and we keep the file names lowercase as well.

• frenchrevolution – French Revolution: https://github.com/razi-rais/aks-helm-sample/blob/master/data/frenchrevolution.json
• renaissance – Renaissance: https://github.com/razi-rais/aks-helm-sample/blob/master/data/renaissance.json
• ww1 – World War I: https://github.com/razi-rais/aks-helm-sample/blob/master/data/ww1.json
• ww2 – World War II: https://github.com/razi-rais/aks-helm-sample/blob/master/data/ww2.json
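For reference, given the HistoricEvent entity shown earlier (a date and a description), each file presumably contains an array of entries along these lines (illustrative values only, not the repo's actual content):

```json
[
  { "date": "1914", "description": "Austria-Hungary declares war on Serbia." },
  { "date": "1918", "description": "An armistice ends the fighting on the Western Front." }
]
```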

Docker Files

Both the frontend and backend services are packaged as Linux Docker container images.

1. Frontend UI: https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Dockerfile

2. Backend API: https://github.com/razi-rais/aks-helm-sample/blob/master/nodejsapi/Dockerfile
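As a rough idea of what these look like, here is a representative multi-stage Dockerfile for the ASP.NET Core 2.0 frontend. This is a sketch using the era's stock base images, and the assembly name is an assumption based on the project folder name; check the actual Dockerfile linked above:

```dockerfile
# Representative multi-stage build for an ASP.NET Core 2.0 app (a sketch;
# the repo's actual Dockerfile may differ)
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore && dotnet publish -c Release -o /app

FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
# "aspcoreweb.dll" is an assumption based on the project folder name
ENTRYPOINT ["dotnet", "aspcoreweb.dll"]
```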

DevOps with Containers

Recently I did a video series for Microsoft Channel 9 on DevOps with Containers (thanks to Lex Thomas and Chris Caldwell for recording these). The idea was simple: show how container technology can help improve the DevOps experience.

It's a roughly two-hour recording (divided into three parts for easy viewing) that covers topics including containerization of applications; continuous integration and deployment of containerized applications using Visual Studio Team Services, Azure Container Service, Docker Swarm, and DC/OS; and monitoring containers using Operations Management Suite and third-party tools.

Here is the breakdown of each session. If you're interested in the sample application I deployed in the last session (an ASP.NET Core web app and API), it's available on my GitHub repo.

Part 1 – Getting Started with Containers

In the first part, the focus is on introducing the basic concepts of containers and the process of application containerization. This part targets Windows Containers, though later parts show how to build multi-container applications based on ASP.NET Core using Linux containers. If you want to try Windows Containers, I have provided this link, which will automatically provision a Windows Server 2016 virtual machine with container support (including docker-compose). The Azure ARM template that provisions the virtual machine is available here.

  • [2:01] What is a Container and how can it benefit organizations?
  • [5:20] DEMO: Windows Containers 101 – Basics and Overview
  • [9:33] DEMO: How to create a Container on Nano Server
  • [15:39] DEMO: Windows Server Core and Containers
  • [19:36] DEMO: How to containerize a legacy ASP.NET 4.5 application
  • [43:48] DEMO: Running Microsoft SQL Server Express inside a Container

Part 2 – Building CI/CD pipeline with VSTS and Azure Container Service

The second part focuses on building a Continuous Integration (CI) and Continuous Deployment (CD) pipeline for multi-container applications using Visual Studio Team Services (VSTS), with Azure Container Service (ACS) hosting DC/OS and Docker Swarm as the deployment target.

I developed a sample application that represents a canonical web app and API (in this case I used ASP.NET Core 1.1, but it could just as well be Node.js, Python, Java, etc.). The demos then show a workflow that starts with submitting code along with a Dockerfile and docker-compose file, which the VSTS build uses to create a new container image every time the build runs, tagged in {container name}:{build number} format. Images are hosted in Azure Container Registry, a private Docker registry. Once the container image is ready, continuous deployment kicks in and VSTS runs the release, which targets both DC/OS and Docker Swarm hosted on Azure Container Service (ACS).

  • [2:54] The Big Picture – Making DevOps successful
  • [6:34] DEMO: Building a Continuous Integration and Continuous Deployment system with Azure Container Service and Visual Studio Team Services
    • Multi-Container Application | ASP.NET Core
    • Container Image Storage | Azure Private Docker Registry
    • Build & Release Deployment | Visual Studio Team Services

Part 3 (Final) – Monitoring and Analytics

This is the final part, which focuses on monitoring and analytics for container applications running on Azure Container Service. Microsoft Operations Management Suite (OMS) is the primary service used in the demos, but I also mention third-party services that are supported on Azure Container Service and provide monitoring, analytics, and debugging functionality.

  • [3:20] Does Orchestration = Containers?
  • [5:40] DEMO: Monitoring and Analytics

Final Thoughts

Containers are a massively useful technology for both greenfield and brownfield application development. Organizations today are at various levels of DevOps maturity, and containers give them a great option for enabling DevOps effectively. Of course, there are considerations, such as the learning curve and the relative lack of proven practices and reference architectures compared to traditional technologies. However, these will become lesser concerns over time as the knowledge gap is filled and reference architectures emerge.

Finally, you should also broaden your design choices to include a combination of containers with serverless computing (e.g. Azure Functions, which actually run inside containers themselves!). This is a particularly interesting option when your service is mostly stateless. It is something I would like to cover in a future blog post.