Part II – Kubernetes DevOps : Introduction to Helm

This is the second post in a multi-part blog series on Kubernetes DevOps using Azure. I am co-authoring this series with my colleague at Microsoft, Daniel Selman. We recently worked on a Kubernetes project together and thought we would share our learnings.

In the last post, you got a better understanding of the application that is going to be deployed in the Kubernetes cluster. In this post, you will learn about a tool called “Helm”.

Part I: Introduction to the Historic Events Microservice
Part II: Getting started with Helm
Part III: VSTS Build (Helm package + containerization of application)
Part IV: VSTS Release (using Helm)
Part V: Lessons Learned – When things go wrong!

So what is Helm?

Do you know how all things Kubernetes are named after nautical terms? This really isn’t any different.
Helm is a package manager for Kubernetes, analogous to apt-get for Linux environments. It is made up of two components: Tiller, the server-side component, and Helm, the client-side component. Helm packages are known as charts, and by default Helm uses a public chart repository. However, it can be configured to use a private repository (like Azure blob storage). Helm charts are written in a mix of YAML and Go templating syntax.

[Image: Helm architecture diagram]
Source: https://www.slideshare.net/alexLM/helm-application-deployment-management-for-kubernetes
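
For example, once a chart repository is hosted in Azure blob storage, pointing the Helm client at it takes one command (the storage account and container names below are hypothetical):

# Register the blob container as a chart repository, then search it
helm repo add myrepo https://mystorageaccount.blob.core.windows.net/helm
helm search myrepo/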

Helm can be used to empower your DevOps workflows in two distinct ways. First, it allows for the parameterization of YAML files for K8s deployments. This means that many people can utilize YAML from a shared source without modifying the file itself. Instead, they can pass their individual values at runtime (e.g. a username for a configmap).
For example, to deploy and configure the MySQL Helm Chart you would run the following command:

helm install --name my-release stable/mysql

No more diving into the YAML to get your deployment up and running. Pretty convenient right?
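
Overriding individual values doesn’t require touching the YAML either. For instance, using value names from the stable/mysql chart at the time of writing (run helm inspect values stable/mysql to see the full list):

helm install --name my-release --set mysqlUser=appuser,mysqlDatabase=events stable/mysql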

Second, it provides a standardized way of distributing and implementing all the associated YAML for an application. Microservices are cool (minimizing dependencies makes everyone’s lives easier), but they also result in many different containers being necessary to get an application running. Kubernetes adds to this sprawl by introducing additional constructs that need to be defined (services, configmaps, secrets). As a result, even a basic three-tier application can require almost a dozen K8s constructs (and likely a dozen different YAML files). Even someone who knows the application like the back of their hand likely wouldn’t know how, and in what order, to deploy these different files.

Helm handles that for you!

Instead of running a dozen commands to deploy the different components of your application, you throw all your YAML into the templates folder of your chart (we’ll get to that later) and Helm will handle it for you.


Quick note on the YAML we’re working with

A previous blog post went through the process of containerizing our history application. The purpose of this post is to cover the Helm piece of the puzzle, but to give you an idea of what we are starting with from a vanilla YAML perspective, here is a quick rundown.
We’ve got four files total for the application: asp-web-dep, asp-web-svc, node-api-dep, and node-api-svc. All of the containers are pulled from the Azure Container Registry. I’ll include the four files here for reference.

asp-web-dep.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspcoreweb-dep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aspcoreweb
        tier: frontend
        track: stable
    spec:
      containers:
        - name: demowebapp
          image: "rzdockerregistry.azurecr.io/aspcoreweb:BuildNumber"
          ports:
            - name: http
              containerPort: 80
      imagePullSecrets:
        - name: sec

asp-web-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: aspcoreweb-svc
spec:
  selector:
    app: aspcoreweb
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
  type: LoadBalancer

node-api-dep.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nodeapi-dep
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nodeapi
        tier: backend
        track: stable
    spec:
      containers:
        - name: nodeapi
          image: "rzdockerregistry.azurecr.io/nodeapi:BuildNumber"
          env:
            - name: url
              value: https://rzshared.blob.core.windows.net/data
          ports:
            - name: http
              containerPort: 8080
      imagePullSecrets:
        - name: sec

node-api-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: nodeapi-svc
spec:
  selector:
    app: nodeapi
    tier: backend
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Let’s make a chart

If you haven’t yet, get kubectl and helm installed on your machine, and have kubectl configured to point at a Kubernetes cluster (we’ll be using AKS, which you can get started with here). Helm uses your kube config, so it should play nice with your cluster out of the box.
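
If you’re following along on AKS, that setup boils down to a couple of Azure CLI commands (the resource group and cluster names below are placeholders):

# Merge the cluster credentials into your kube config
az aks get-credentials --resource-group my-rg --name my-aks-cluster
# Verify kubectl is pointed at the right cluster
kubectl config current-context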
At the time of writing, Helm requires Tiller, the server-side component. Run the following command to initialize Tiller on your cluster:

helm init
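
Once helm init finishes, you can confirm that Tiller is up and reachable before moving on:

# Tiller runs as a deployment in the kube-system namespace
kubectl get pods --namespace kube-system -l app=helm
# Should report both the client and server versions
helm version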

Next, let’s scaffold a chart. When you run a simple helm create [name] command, it creates a basic nginx chart, which we will replace with the components of our application. First, run the helm create command:

helm create [chart_name]

This will create a new directory with all the elements of the helm chart.
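
At the time of writing, the scaffolded chart looks roughly like this (the exact contents may vary slightly between Helm versions):

[chart_name]/
  Chart.yaml          # chart metadata (name, version, description)
  values.yaml         # default values referenced by the templates
  charts/             # dependent charts, if any
  templates/          # the YAML (plus Go templating) that gets deployed
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl
    NOTES.txt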

This blog isn’t going to cover all the elements of a Helm chart, but will instead focus on the templates folder and the values.yaml file. The templates folder is where your YAML will be placed. It’s currently populated by the nginx files, so you’ll want to delete all of the content in this folder and replace it with the YAML for your application.

Similarly, delete the content (not the file) of values.yaml. Let’s start from a blank slate and replace it with the following values. The buildNumber will be used later on for the VSTS pipeline, and the imagePullSecret will be used to specify the… well, imagePullSecret. Don’t worry about the specific values, as these can be updated later on.

buildNumber: BuildNumber
imagePullSecret: acr 
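
These values are referenced from the templates using Go templating syntax. For example, the hard-coded BuildNumber tag and secret name in the deployment YAML above can be swapped for references to these values:

          image: "rzdockerregistry.azurecr.io/aspcoreweb:{{ .Values.buildNumber }}"
      imagePullSecrets:
        - name: {{ .Values.imagePullSecret }}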

We will make one modification to the YAML files, however. Under the hood, Helm has a “Release” object which contains information about the deployment of the Helm chart. Specifically, .Release.Name provides a unique identifier for your chart so that you can deploy one chart many times to a cluster without errors associated with overlapping names. We’ve added a reference to the release name attribute in each of the YAML files, like so:

  name: {{ .Release.Name }}-aspcoreweb-dep
  name: {{ .Release.Name }}-aspcoreweb-svc
  name: {{ .Release.Name }}-nodeapi-dep
  name: {{ .Release.Name }}-nodeapi-svc
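
If you want to sanity-check the substitutions without deploying anything, Helm can render the templates against the cluster and print the result:

# Render the chart with a throwaway release name and inspect the output
helm install ./[chart_name] --name test --dry-run --debug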

Let’s recap. We’ve initialized Tiller on our cluster, scaffolded a Helm chart, and thrown our (mostly) vanilla YAML files into the templates folder.
Our last step is to package it up for ease of distribution. Navigate to the base directory of your Helm chart and run the following command:

helm package .

Now your chart can be distributed and installed on your cluster using helm install:

helm install [chart_name]
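
Packaging produces a versioned .tgz (the version comes from Chart.yaml), and install-time overrides work just like they did for the MySQL chart. A hypothetical end-to-end run, assuming a chart named mychart at version 0.1.0:

helm package .
# produces mychart-0.1.0.tgz
helm install mychart-0.1.0.tgz --name myrelease --set buildNumber=42,imagePullSecret=sec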

Now that we have some familiarity with the application, Kubernetes, and Helm, we are going to transition to VSTS to handle the Build and Release process from code to chart deployment over the next few blog posts, so make sure to check back as we continue this series.

Part I – Kubernetes DevOps : Introduction to the Historic Events Microservice

This is the first post in a multi-part blog series on Kubernetes DevOps using Azure. I am co-authoring this series with my colleague at Microsoft, Daniel Selman. We recently worked on a Kubernetes project together and thought we would share our learnings.

Anyways, below is a high-level structure of the blog posts we are planning to publish:

Part I: Introduction to the Historic Events Microservice
Part II: Getting started with Helm
Part III: VSTS Build (Helm package + containerization of application)
Part IV: VSTS Release (using Helm)
Part V: Lessons Learned – When things go wrong!

We do assume that you have basic knowledge of K8s and Docker containers, as we don’t really cover the basics of either of those in this blog series.

Software/Services

Following is the list of software you want to install on your machine.

• Kubectl
• Helm
• Docker
• Minikube (optional, only needed for local testing)
• Git
• Azure CLI

If you’d like to use a script to install this software on a Linux VM (tested on Ubuntu 16.04), you can download it here: https://github.com/razi-rais/microservices/blob/master/reference-material/install-k8s-lab-software.sh
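
If you go the script route, something along these lines should do it (the script installs packages, so it needs sudo):

wget https://raw.githubusercontent.com/razi-rais/microservices/master/reference-material/install-k8s-lab-software.sh
chmod +x install-k8s-lab-software.sh
sudo ./install-k8s-lab-software.sh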

On the services side, we will be using Azure AKS and VSTS. In case you don’t have an Azure subscription, you can get a free Azure trial here: https://azure.microsoft.com/en-us/offers/ms-azr-0044p

Alright, so for demonstration purposes, we have created a simple Historic Events microservice. We thought it wouldn’t hurt to throw in some history while working on modern technologies!

Overview

From a technical perspective, we have a microservice that serves the UI, written in ASP.NET Core 2.0. It pulls data by talking to various RESTful endpoints exposed by a Node.js API that is served by another microservice. The actual content served by the API (the details about historic events) is stored in various JSON files, which are persisted as blobs on Azure Storage.
In a nutshell, from an end user standpoint, the web app home page looks like below:

[Image: Historic Events web app home page]

When a user wants to learn more about a particular historic event, they can either select the event from the top menu, or simply click on the description of the event provided on the home page.

For example, the French Revolution event page is shown below. All event detail pages follow a similar table-based layout to list critical events.

[Image: French Revolution event page]

Code Walkthrough

The code and all relevant artifacts are available on GitHub: https://github.com/razi-rais/aks-helm-sample

[Image: repository contents]

This is a plain vanilla ASP.NET Core 2.0 web application.

HistoricEvent (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/Event.cs#L8) defines a basic entity that represents an event object. Its attributes are the date and description of a historic event.

[Image: HistoricEvent class]

Most of the actual work happens inside the HomeController, which provides methods to connect to the backend API service and fetch the data.

The GetEvent (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/HomeController.cs#L38) method takes the URL of an endpoint as a parameter. It connects to that endpoint, reads the content as a string asynchronously, and ultimately converts it into JSON objects stored in a List of type HistoricEvent. Finally, it returns the List object containing all the events.

[Image: GetEvent method]

If you are wondering who calls GetEvent, it is the method called Event (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Controllers/HomeController.cs#L54).

[Image: Event method]

This is basically an action tied to the View. The parameter id essentially acts as a key referring to the event we want to fetch from the backend service (e.g. ww2, ww1, etc.). The method itself is trivial, and we have left most of the optimization out. At the moment it does the bare minimum: it prints to the console which endpoint it is going to connect to, with the port currently set to 8080. Finally, it calls GetEvent to get the HistoricEvent objects stored in the List and sends them back with a View.

The Event.cshtml (https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Views/Home/Event.cshtml) View presents the list of events in a table format.

[Image: Event.cshtml view]

Data Api (NodeJS)

The backend service code is placed inside the NodeJSApi folder.
[Image: NodeJSApi folder contents]

The server.js file runs the server, which listens on port 8080.
Since the actual files containing the event data are stored on Azure Blob Storage, we set the URL variable to the blob storage endpoint, which is passed in through an environment variable.
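
That also makes the API easy to run locally. A hypothetical docker run (the image tag is assumed) only needs the url environment variable:

docker run -p 8080:8080 -e url=https://rzshared.blob.core.windows.net/data rzdockerregistry.azurecr.io/nodeapi:latest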

Let’s take a look at the endpoint that returns ww1 (World War 1) related events (https://github.com/razi-rais/aks-helm-sample/blob/master/nodejsapi/server.js#L22). First, it connects to the URL, which points to the Azure Blob file (e.g. https://name.blob.core.windows.net/data/ww1), and then it reads the relevant JSON file (e.g. ww1.json). We check whether the status is 200, meaning the file was pulled from the blob, in which case the content of the response is set to the JSON.

[Image: ww1 endpoint in server.js]

Historic Events JSON Files

All the data related to the various historic events is available in JSON file format. You can find the link to each historic event JSON file below.


NOTE: Azure blob storage requires file names to be in the lower case.


Name              Description        URL
frenchrevolution  French Revolution  https://github.com/razi-rais/aks-helm-sample/blob/master/data/frenchrevolution.json
renaissance       Renaissance        https://github.com/razi-rais/aks-helm-sample/blob/master/data/renaissance.json
ww1               World War I        https://github.com/razi-rais/aks-helm-sample/blob/master/data/ww1.json
ww2               World War II       https://github.com/razi-rais/aks-helm-sample/blob/master/data/ww2.json
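
We won’t reproduce the file contents here, but given that HistoricEvent carries a date and a description, each file presumably looks something like the sketch below (illustrative only; see the repo for the real schema):

[
  { "date": "July 28, 1914", "description": "Austria-Hungary declares war on Serbia." },
  { "date": "November 11, 1918", "description": "Armistice ends the fighting on the Western Front." }
]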

Docker Files

Both the frontend and backend services are packaged as Docker Linux container images.

1. Frontend UI: https://github.com/razi-rais/aks-helm-sample/blob/master/aspcoreweb/Dockerfile

2. Backend API: https://github.com/razi-rais/aks-helm-sample/blob/master/nodejsapi/Dockerfile
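
As a reminder of how images like these end up in the registry referenced by the deployment YAML, the usual ACR flow looks like this (registry name taken from the YAML above; the tag is hypothetical):

az acr login --name rzdockerregistry
docker build -t rzdockerregistry.azurecr.io/aspcoreweb:1 ./aspcoreweb
docker push rzdockerregistry.azurecr.io/aspcoreweb:1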

Building and Running an Auditing Solution on Blockchain

On February 21st, I will be conducting an event at the Microsoft NYC campus on building and running a fully functional blockchain-based audit trail application.

The first half is a good fit for both business and technical audiences, as it covers auditing scenarios using blockchain. The latter half will showcase an open source project that provides tracking of Wikipedia change logs using blockchain.

I will do a deep dive into the running solution, which leverages the Ethereum Rinkeby Network. I will be showcasing the open source project “Wikipedia logs change tracking” that I am currently working on.

Session Summary

  • 6:30 PM | Overview of auditing capabilities of blockchain
  • 7 PM – 9 PM | Project Showcase – Tracking/Auditing Changes from Wikipedia Logs.
  • Q&A + Demos
  • Developers are encouraged to bring their laptops running Mac OS or Windows 10 (or Windows Server 2016). Instructions to setup the project will be provided during the session.

    Wednesday, Feb 21, 2018, 6:30 PM
    Microsoft, 11 Times Square, New York, NY

    Understanding R3 Corda and Running it on Azure

    R3 Corda is a blockchain-inspired distributed ledger technology (DLT) from R3 that is specifically designed for financial and regulated transactions, and emphasizes privacy and security between participants. While it is generally available for download as open source code (corda.net), R3 also makes it available on the Azure platform and has plans to integrate Corda with numerous Azure capabilities.

    R3 recently secured a 107 million USD investment from a group including SBI Group, Bank of America Merrill Lynch, HSBC, Intel, and Temasek. R3’s globally diverse group of investors represents an equal geographical split across Europe, Asia-Pacific, and the Americas, counting over 40 participants from over 15 countries:

    • Banco Bradesco
    • Bangkok Bank
    • Bank of America Merrill Lynch
    • Bank of Montreal
    • Bank of New York Mellon
    • Barclays
    • BBVA
    • BNP Paribas
    • B3 (BM&FBOVESPA and Cetip)
    • Canadian Imperial Bank of Commerce
    • Citi
    • Commerzbank
    • Commonwealth Bank of Australia
    • Credit Suisse
    • CTBC Financial Holding
    • Daiwa Securities Group
    • Danske Bank
    • Deutsche Bank
    • HSBC
    • ING
    • Intel Capital
    • Intesa Sanpaolo
    • Itaú Unibanco S.A.
    • Mitsubishi UFJ Financial Group (MUFG)
    • Mizuho
    • Natixis
    • Nomura
    • Nordea Bank
    • Northern Trust
    • OP Cooperative
    • Ping An
    • Royal Bank of Canada
    • SBI Group
    • SEB
    • Societe Generale
    • Sumitomo Mitsui Banking Corporation
    • TD Bank Group
    • Temasek
    • The Bank of Nova Scotia
    • The Royal Bank of Scotland
    • U.S. Bank
    • UBS AG
    • Wells Fargo
    • Westpac

    As demand for R3 Corda is increasing and Microsoft supports running it on Azure through the Azure Marketplace, I decided to have a discussion about R3 Corda during our next NYC Azure User Group meeting in October.

    To talk about R3 Corda and its partnership with Azure, I invited Tom Menner (Director and Solutions Architect at R3) to deliver a talk for my NYC Azure User Group. Since we are based in Manhattan, a significant number of our members work for financial companies, and based on their feedback this session certainly resonated with them.

    Tom predominantly covered the following topics:

    • Understand what Corda is and how it differs from blockchain platforms such as Ethereum and Hyperledger Fabric
    • Use cases of Corda
    • Corda on Azure and R3’s partnership with Microsoft

    If you’d like to view or download the slides used during the session, I have made them available on SlideShare.

    Homomorphic Encryption 101

    I was recently exploring methods for improved privacy using various encryption schemes and stumbled upon Homomorphic Encryption, which has huge potential in that area. I do feel that it has a higher barrier to entry, considering its complexity and current level of maturity. If you’re looking for learning resources/libraries to get started, take a look at the Git repo that I created for the purpose of sharing resources around Homomorphic Encryption.

    At a very high level, Homomorphic Encryption allows you to perform basic mathematical computations (+, -, ×, /) on encrypted data (ciphertext) without needing access to the unencrypted data (plaintext). This ability to perform operations on encrypted data has many high-impact use cases.
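
    For example, the Paillier scheme used later in this post is additively homomorphic: multiplying two ciphertexts produces an encryption of the sum of the underlying plaintexts (n being the public modulus):

    E(m1) · E(m2) mod n² = E(m1 + m2 mod n)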

    Just to give you an idea, let’s say you’d like to leverage a service hosted by a cloud provider, but you don’t want to hand over your data to the cloud provider without it being encrypted. The biggest challenge today is that without access to the actual data (in decrypted form), there are very few useful operations that can be performed on it. However, with Homomorphic Encryption the cloud provider can take your data in encrypted form, process it without decrypting it, and then give you back the result, which is also encrypted. At no point is your data revealed to the cloud provider in decrypted form.

    The biggest benefit of this type of encryption is privacy. So, at this point you may ask: if this is so useful, why hasn’t it been adopted commercially on a wider scale? Well, the short answer is that Homomorphic Encryption is still in its infancy. This article calls out some of the challenges that you may want to look into. In short, it is still being actively worked on, and organizations like NIST are working towards its standardization.

    Finally, let me leave you with a simple example using the Python Paillier library. I will take a set of numbers, encrypt them using the public key, and then have the library (think of it as the cloud provider, though I’m running everything on my laptop in a Docker container) perform mathematical operations (+, -, *, /) on the numbers while they are encrypted. The only thing the library needs is the public key. After the operations are done, the results are handed back, also in encrypted form. At the end, you decrypt the results using your private key. In short, at no point does the library have access to your unencrypted data. There is another library that is quite useful for trying Homomorphic Encryption, called SEAL (Simple Encrypted Arithmetic Library), from Microsoft, which I have also experimented with but am going to cover in a separate post.

    As I mentioned earlier, I’m using a Docker container image that I created to package the Python Paillier library, which is available on DockerHub.

    
    #Launch Docker container and remove it automatically afterwards.
    docker run --rm -it rbinrais/python-paillier:1.2.2 bash
    
    
    xxxxxx@xxxxxxxxxxx:/# python3
    Python 3.5.2 (default, Nov 17 2016, 17:05:23) 
    [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    
    #Import Library
    from phe import paillier
    
    #Generate a Private/Public Key Pair
    public_key, private_key = paillier.generate_paillier_keypair() 
    
    #Define Numbers 
    secret_number_list = [12, 2.89763, -4.6e-12] 
    
    #Encrypt Numbers (Using Public Key)
    encrypted_number_list = [public_key.encrypt(x) for x in secret_number_list]
    
    #List Encrypted Numbers
    encrypted_number_list
    [<phe.paillier.EncryptedNumber object at 0x7efd57c0f630>, <phe.paillier.EncryptedNumber object at 0x7efd57c16358>, <phe.paillier.EncryptedNumber object at 0x7efd553229b0>]
    
    #Decrypt Numbers (Using Private Key)
    [private_key.decrypt(x) for x in encrypted_number_list]
    [12, 2.89763, -4.6e-12]
    
    #Perform Mathematical Operations 
    a, b, c = encrypted_number_list
    a_plus_10 = a + 10
    a_minus_b = a - b
    b_times_4_7 = b * 4.7 
    c_div_33 = c / 33
    
    #Display Encrypted Results 
    a_plus_10
    <phe.paillier.EncryptedNumber object at 0x7efd57c0f668>
    
    a_minus_b
    <phe.paillier.EncryptedNumber object at 0x7efd57c0f5c0>
    
    b_times_4_7
    <phe.paillier.EncryptedNumber object at 0x7efd55d03240>
    
    c_div_33 
    <phe.paillier.EncryptedNumber object at 0x7efd55d03978>
    
    #Decrypt Results using Private Key
    private_key.decrypt(a_plus_10)
    22
    
    private_key.decrypt(a_minus_b)
    9.10237
    
    private_key.decrypt(b_times_4_7)
    13.618861
    
    private_key.decrypt(c_div_33)   
    -1.393939393939394e-13
    
    

    Creating Developer’s Docker Linux Virtual Machine on Azure


    For an upcoming developer event on Docker, I had to create a handful of Linux Ubuntu virtual machines on Azure with Docker and a few additional pieces of software installed.

    I looked into a couple of ways to do that on Azure in a consistent fashion. The first option was to use DevTest Labs with artifacts. Another option was to use custom extensions. There are other options too, including creating your own base virtual machine image with all the software installed and then uploading it to Azure. I picked the custom extension approach, mainly because it’s the simplest and I knew the software I needed to install wouldn’t take more than ~5 minutes on average. It also offers a reasonable tradeoff (speed of deployment versus managing your own virtual machine image, etc.).

    Anyways, the actual process to leverage custom extensions is rather straightforward: create the scripts, and then call them from your ARM template (which is a JSON file).
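
    To give you an idea, the custom script extension resource inside an ARM template looks roughly like the sketch below (the names and the script URL are placeholders, not the ones from my template):

    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('vmName'), '/installsoftware')]",
      "apiVersion": "2017-03-30",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
      ],
      "properties": {
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": ["https://raw.githubusercontent.com/<your-repo>/install.sh"],
          "commandToExecute": "bash install.sh"
        }
      }
    }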

    Here is the complete list of software. I chose to use the Ubuntu 16.04 LTS Azure virtual machine image, so the OS itself didn’t need to be installed.

    • Docker (Engine & Client)
    • Git
    • Nodejs
    • Dotnetcore
    • Yeoman
    • Bower
    • Azure Command Line Interface (CLI)

    The approach I took was to create a single script file for each one of them to keep things simple and clean.

    [Image: install script files]

    Once done with the scripts, all I need to do is reference/call the install.sh script from the custom extension. Take a look at line 211 in the JSON.

    If you’d like to look at the code artifacts, I have made them available in a Git repo. You can also simply try creating a virtual machine with a single click of the “Deploy on Azure” button. You do need an active Azure subscription before you can deploy a virtual machine on Azure.

    [Image: “Deploy on Azure” button]

    Event Announcement “Blockchain 101 – Introduction for Developers”


    Some of you may already be aware that I host the NYC MS Cloud User Group technology meetup every month at the Microsoft Manhattan campus. This month, I will be hosting and presenting alongside my colleague Cale Teeter on blockchain. I did a similar session earlier this year in January; the turnout was great, and based on feedback we are doing another session in July.

    Here is the brief agenda:

    • Learn the basics of blockchain. What exactly is a block? How are blocks created? What are transactions?
    • Understand what a transaction is and the role of mining.
    • Learn what smart contracts are and how to write them in Solidity.
    • Demos (mostly based on Ethereum, but we will talk about other chains too, as it’s important to understand the overall landscape)

    Blockchain 101 – Introduction for Developers

    Monday, Jul 31, 2017, 6:30 PM


    DevOps with Containers

    Recently I did a video series for Microsoft Channel9 on DevOps with Containers (thanks to Lex Thomas and Chris Caldwell for recording these). The idea was simple: show and tell how container technology can help improve the DevOps experience.

    It’s a ~2-hour-long recording (divided into three parts for easy viewing) that covers topics including containerization of applications; continuous integration and deployment of containerized applications using Visual Studio Team Services, Azure Container Service, Docker Swarm, and DC/OS; and monitoring containers using Operations Management Suite and 3rd party tools.

    Here is the breakdown of each session. If you’re interested in the sample application that I deployed in the last session (an ASP.NET Core web app and API), it’s available on my Git repo.

    Part 1 – Getting Started with Containers

    In the first part, the focus is on introducing the basic concepts of containers and the process of application containerization. I targeted Windows Containers in this part, though later parts show how to work with multi-container applications based on ASP.NET Core using Linux containers. If you want to try Windows Containers, I have provided this link that will allow you to automatically provision a Windows Server 2016 virtual machine with container support (including docker-compose). Also, the Azure ARM template that provisions the virtual machine is available here.

    • [2:01] What is a Container and how can it benefit organizations?
    • [5:20] DEMO: Windows Containers 101 – Basics and Overview
    • [9:33] DEMO: How to create a Container on Nano Server
    • [15:39] DEMO: Windows Server Core and Containers
    • [19:36] DEMO: How to containerize a legacy ASP.NET 4.5 application
    • [43:48] DEMO: Running Microsoft SQL Server Express inside a Container

    Part 2 – Building CI/CD pipeline with VSTS and Azure Container Service

    The second part focuses on building a Continuous Integration (CI) and Continuous Deployment (CD) pipeline for multi-container applications using Visual Studio Team Services (VSTS) with deployment target of Azure Container Service (ACS) hosting DC/OS and Docker Swarm.

    I developed a sample application that represents a canonical web app and API (in this case I used ASP.NET Core 1.1, but it could really be Node.js, Python, Java, etc.). The demos then show a workflow that starts by submitting code along with a Dockerfile and a docker-compose file, which the VSTS build uses to create a new container image every time a build runs, in {container name:buildnumber} format. Container images are hosted in Azure Container Registry, which is a private Docker trusted registry (DTR). Once a container image is ready, continuous deployment kicks in and VSTS runs the release, which targets both DC/OS and Docker Swarm, hosted on Azure Container Service (ACS).
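
    For context, a two-service docker-compose file of the kind the build consumes could look like this sketch (service names and paths are illustrative, not the actual demo files):

    version: '3'
    services:
      web:
        build: ./web        # ASP.NET Core front end
        ports:
          - "80:80"
        depends_on:
          - api
      api:
        build: ./api        # REST API consumed by the front end
        ports:
          - "8080:8080"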

    • [2:54] The Big Picture – Making DevOps successful
    • [6:34] DEMO: Building a Continuous Integration and Continuous Deployment system with Azure Container Service and Visual Studio Team System
      • Multi-Container Application | ASP.NET Core
      • Container Images Storage | Azure Private Docker Registry
      • Build & Release Deployment | Visual Studio Team System

    Part 3 (Final) – Monitoring and Analytics

    This is the final part, which focuses on monitoring and analytics for container applications running on Azure Container Service. Microsoft Operations Management Suite (OMS) is the primary service used in the demos, but I also mention 3rd party services that are supported on Azure Container Service and provide monitoring, analytics, and debugging functionality.

    • [3:20] Does Orchestration = Containers?
    • [5:40] DEMO: Monitoring and Analytics

    Final Thoughts

    Containers are a massively useful technology for both greenfield and brownfield application development. Also, organizations today are at various levels of maturity when it comes to DevOps, and containers provide them with a great option to enable DevOps in an effective way. Of course there are considerations, like the learning curve and the lack of proven practices and reference architectures compared to traditional technologies. However, this will become less of a concern with time, as the knowledge gap is filled and reference architectures emerge.

    Finally, you should also broaden your design choices to include a combination of containers with serverless computing (e.g. Azure Functions, which actually run inside containers themselves!). This is a particularly interesting option when your service is mainly stateless. This is something I would like to cover in a future blog post.

    First Look Into Blockchain

    Since last year, I have been spending time with customers understanding how blockchain can help them improve or replace existing processes. It’s a relatively new technology but evolving very fast. Anyways, I recorded an hour-long video session, First Look Into Blockchain, for Channel9. It predominantly focuses on blockchain from a developer’s perspective.

    • [0:57] What is Blockchain?
    • [2:14] How is this different than a standard distributed database?
    • [5:16] DEMO: Introduction and Overview of Blockchain in a Dev/Test lab on Azure
    • [30:40] DEMO: Blockchain and .NET

    How To Set Custom Resolution On MacBook Pro Retina

    I recently experienced an inconvenience when I discovered that on a MacBook Pro Retina running OS X El Capitan (and even a few of its predecessors, as I found out) you cannot set a custom resolution. Long story short: for recording a video I needed to set the resolution to 1280×720, which was not among the preset choices that Apple provides. You can see the available choices in the screenshot below (click on the image to see it in higher resolution).

    MacBook Pro Retina Display Settings

    As someone who regularly uses Windows, this came across as a limitation in the Apple operating system.

    I managed to find a working solution to this problem. Apparently there is a piece of software, Retina DisplayMenu, that provides a list of the most common resolutions to select from, and it works perfectly. You can see all the available resolutions in the screenshot below (click on the image to see it in higher resolution).

    Retina DisplayMenu Resolutions

    Retina DisplayMenu is available for download here. Please note that I do not own this software, nor can I vouch for its quality, so please use it at your own discretion.