Kubernetes and Container Registry

In this blog we are going to show how to set up an Azure Kubernetes cluster and an Azure Container Registry, and run a container on AKS.

I’m presuming we all know what Kubernetes is, because going into that subject would take a few extra episodes of this blog. For more information, take a look at the following:

AKS Documentation
ACR Documentation

In this blog we are going to create everything from scratch to set up a new Container Registry and a Kubernetes cluster.

We need the following items.

  • Resource group
  • Key Vault
  • Service Principal
  • Container Registry
  • Kubernetes Cluster
  • Docker image

We also need the following software installed:

kubectl
If you have the Azure CLI tools installed, you can install it via:

az aks install-cli

Otherwise it can be downloaded from:

Docker Desktop
Can be downloaded from https://hub.docker.com

In the code we’ll be using the newer Az commands instead of AzureRM. The ARM templates that are used are mostly the default ones provided by Microsoft, but we’ve added the creation of some default tags to them.
The ARM templates and source code for this blog can be downloaded from:

First connect to your subscription

$Cred = Get-Credential  
Connect-AzAccount -SubscriptionId "[your sub id]" -Credential $Cred

Now we set some default parameters:

$ResourceGroupName        = 'rdk-akstest'
$ResourceGroupLocation    = 'WestEurope'
$ServicePrincipalName     = 'AKSClusterDemo'
$KeyVaultName             = 'AKSKeyVaultRDK'
$ClusterName              = 'rdkakscluster'
$Owner                    = 'Rex de Koning'
$RegistrySKU              = 'Basic'
$RegistryName             = 'methosregistry'
$DockerEmail              = 'rex@methos.nl'
$AgenVMSize               = 'Standard_DS2_v2'
$KubernetesVersion        = '1.12.8'
$NetworkPlugin            = 'kubenet'
$AgentCount               = 1
$Tags =  @{ Owner="$Owner" };
$ServicePrincipalName += $ResourceGroupName

First we make sure our Resource Group exists:

#Create resource group if it does not exist
$resourceGroup = Get-AzResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue
if (!$resourceGroup) {
    New-AzResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Tag $Tags
}

After this we create our KeyVault:

$keyVault = Get-AzKeyVault -VaultName $KeyVaultName
if (!$keyVault) {
    $keyVault = New-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $ResourceGroupName -Location $ResourceGroupLocation -EnabledForTemplateDeployment -Tag $Tags
}

When we have the Key Vault we can create our service principal and store its secret in the Key Vault.

In this demo we create a service principal without defining any fine-grained roles or rights. By only supplying the display name, a service principal without any specific rights is created and an application ID is generated.

$servicePrincipal = Get-AzADServicePrincipal -DisplayName $ServicePrincipalName
if (!$servicePrincipal) {
    $servicePrincipal = New-AzADServicePrincipal -DisplayName $ServicePrincipalName
    $Ptr = [System.Runtime.InteropServices.Marshal]::SecureStringToCoTaskMemUnicode($servicePrincipal.Secret)
    $ServicePrincipalSecret = [System.Runtime.InteropServices.Marshal]::PtrToStringUni($Ptr)
    Set-AzKeyVaultSecret -VaultName $KeyVaultName -Name $servicePrincipal.ApplicationId -SecretValue $servicePrincipal.Secret -Tag $Tags
} else {
    $ServicePrincipalSecret = (Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $servicePrincipal.ApplicationId).SecretValueText
}
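As a side note, the Marshal trick used here to turn a SecureString back into plain text can be tried on its own, without any Azure connection. This is a minimal sketch where a locally created SecureString stands in for $servicePrincipal.Secret:

```powershell
# Stand-alone sketch of the SecureString-to-plain-text conversion used above.
# A locally created SecureString stands in for $servicePrincipal.Secret.
$secret = ConvertTo-SecureString -String 'demo-secret' -AsPlainText -Force
$Ptr = [System.Runtime.InteropServices.Marshal]::SecureStringToCoTaskMemUnicode($secret)
try {
    $plain = [System.Runtime.InteropServices.Marshal]::PtrToStringUni($Ptr)
} finally {
    # free the unmanaged copy so the plain text does not linger in memory
    [System.Runtime.InteropServices.Marshal]::ZeroFreeCoTaskMemUnicode($Ptr)
}
$plain
```

The try/finally ensures the unmanaged buffer is zeroed and freed even when the conversion throws.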

We can now create our Container Registry. This deployment also returns the admin credentials that are created, and we store those in our Key Vault as well:

$CRSParameters = @{
    "registryName"     = $RegistryName
    "registryLocation" = $ResourceGroupLocation
    "registrySku"      = $RegistrySKU
    "adminUserEnabled" = $true
}
$UserName = ""
$Password = ""
$Server   = ""
$CRSDeploy = New-AzResourceGroupDeployment -Name "Deployment" -ResourceGroupName $ResourceGroupName -TemplateFile .\crs.json -TemplateParameterObject $CRSParameters #-Verbose
$CRSDeploy.Outputs.GetEnumerator() | ForEach-Object {
    $myObject = $_
    switch ($_.Key) {
        "registryUsername" { $UserName = $myObject.Value.Value; break }
        "registryPassword" { $Password = $myObject.Value.Value; break }
        "registryServer"   { $Server   = $myObject.Value.Value; break }
        default { break }
    }
}

$Password = ConvertTo-SecureString -String $Password -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName $KeyVaultName -Name $UserName -SecretValue $Password -Tag $Tags
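The enumeration pattern can be exercised offline. In this sketch a plain hashtable mimics the shape of $CRSDeploy.Outputs; note that the pipeline object is saved into $myObject first, because inside a switch block $_ refers to the value being switched on, not the pipeline object:

```powershell
# Offline sketch: $outputs mimics the shape of $CRSDeploy.Outputs.
$outputs = @{
    registryUsername = @{ Value = 'methosregistry' }
    registryPassword = @{ Value = 's3cret' }
    registryServer   = @{ Value = 'methosregistry.azurecr.io' }
}
$UserName = $Password = $Server = ''
$outputs.GetEnumerator() | ForEach-Object {
    # save the pipeline object: inside the switch, $_ is the switch input
    $myObject = $_
    switch ($_.Key) {
        'registryUsername' { $UserName = $myObject.Value.Value; break }
        'registryPassword' { $Password = $myObject.Value.Value; break }
        'registryServer'   { $Server   = $myObject.Value.Value; break }
        default { break }
    }
}
```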

Now we can create our AKS cluster. For this demo we are only going to create one node ($AgentCount). Normally you would create at least three nodes. Also, as mentioned before, the service principal used for this deployment has no specific rights. Normally you would create a service principal with specific rights, so that it has access to the container registry and can create a load balancer when needed.

In one of the next blogs we will explain more about service principals and rights/role assignment.

$DeployParameters = @{
    "resourceName"                 = "$ClusterName"
    "location"                     = "$ResourceGroupLocation"
    "dnsPrefix"                    = "$ClusterName"
    "agentCount"                   = $AgentCount
    "agentVMSize"                  = "$AgenVMSize"
    "servicePrincipalClientId"     = "$($servicePrincipal.ApplicationId)"
    "servicePrincipalClientSecret" = "$ServicePrincipalSecret"
    "kubernetesVersion"            = "$KubernetesVersion"
    "networkPlugin"                = "$NetworkPlugin"
    "enableRBAC"                   = $true
    "enableHttpApplicationRouting" = $false
    "Owner"                        = "$Owner"
}
$Deployment = New-AzResourceGroupDeployment -Name "Deployment" -ResourceGroupName $ResourceGroupName -TemplateFile .\aks.json -TemplateParameterObject $DeployParameters

We can also output the created cluster name:

$Deployment.Outputs.GetEnumerator() | ForEach-Object {
    Write-Output "$($_.Key) : $($_.Value.Value)"
}
At this time we can begin to work with our cluster. First we need our credentials. We have our normal credentials which we can get via:

# Get AKS Cluster Credentials for kubectl
Import-AzAksCredential -ResourceGroupName $ResourceGroupName -Name $ClusterName -Force

If needed for any reason it is also possible to get the admin user via:

# Get Admin user
#Import-AzAksCredential -ResourceGroupName $ResourceGroupName -Name $ClusterName -Admin -Force

After we imported the AKS credentials we can check if our node(s) are up using:

#Check if our nodes are up
kubectl get nodes --output=wide

When our nodes are up we can start connecting to our container registry. For this we first get the password from the Key Vault:

#Get dockerpassword from Vault
$DockerPassword = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $UserName

Write-Output "Login to registry"
$DockerPassword.SecretValueText | docker login $server -u $UserName --password-stdin

Now that we have everything in place we can start creating a Docker image or re-use an existing one. For now we are going to re-use an existing NGINX demo Docker image which contains a Hello World page.

Write-Output "Download default Hello-World image"
docker pull nginxdemos/hello

Write-Output "Re-tag image"
docker tag nginxdemos/hello $server/hello:1.0

Write-Output "Push Image to CRS"
docker image push $server/hello:1.0

We have now re-tagged an existing image and pushed it to our own private container registry.

After this it is time to link our AKS Cluster to our Container Registry

#Create secret to Link AKS to CRS
kubectl create secret docker-registry $server --docker-server=$server --docker-username=$UserName --docker-password=$($DockerPassword.SecretValueText) --docker-email=$DockerEmail

We can issue a command to check the contents of the newly created secret:

#Check the secret
kubectl describe secret

We can now create our Kubernetes YAML to start our own pods. In this case we fill a variable with the content, but it could also be a file.

$yaml = @"
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: $server/hello:1.0
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: $server
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: my-api
"@

Now that we have the YAML we can create a deployment to Kubernetes.
For this demo we use the output of $yaml, piped to STDIN, as input for kubectl create -f, as specified by '-'.

#create deployment using yaml content via STDIN
$yaml | kubectl create -f -

If you would like to use a file with YAML content you can use:

kubectl create -f .\file.name

We can get the deployment status via:

kubectl get service/my-api

When the deployment is ready the external IP will be visible and it can be used to open our demo page in the browser:

So this shows that, in little time and with little effort, several services can be spun up in Azure and you can run your own Docker images on an Azure Kubernetes cluster.

It is also possible to connect to a web-based dashboard for our AKS Cluster. For this we issue the following commands:

#Get Kubernetes Dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
kubectl proxy


When first accessing the dashboard it will ask for the kubeconfig file. Under Windows this is: %userprofile%\.kube\config

To clean up the resources we just created, we can issue the following commands:

# Clean up Azure resources
Remove-AzAks -ResourceGroupName $ResourceGroupName -Name $ClusterName -Force
Remove-AzADServicePrincipal -DisplayName $ServicePrincipalName -Force
Remove-AzADApplication -DisplayName $ServicePrincipalName -Force
Remove-AzResourceGroup -Name $ResourceGroupName -Force

The Automated Career Gap

Category : Uncategorized

Automation in IT is great! It removes boring tasks, gets managers off your back and lowers the FTE count required for entire departments. But the widespread use of automation, Desired State Configuration and Infrastructure as Code in IT has led to a foreseen but untackled side effect:

The step up from a junior position is getting bigger and bigger.

Here’s how this works.

Imagine a junior sys admin, Edd, starting out in a new position. What are the kinds of tasks that he gets entrusted with? Creating new users, doing password resets and sorting out which tickets should be sent back to the service desk (or fixing it himself since that’s less of a hassle) and which should go on to the senior sys admins.

After the first few weeks, Edd gets his hands on a request for change for some new server VM and gets his first chance at writing up a plan, getting it reviewed and installing and configuring the actual VM. After a year or two, Edd has seen and done enough in the environment to be left in control of the day-to-day IT operation.

While Edd is running the operation, the seniors have started automating. They decided to use Powershell, Azure Resource manager, Desired State Configuration and Azure functions to automate most IT tasks, ranging from password resets by the end user up to spinning up new, fully configured VM’s. Now all the things he used to do at the service desk and most of his current daily work is automated (this is of course an ideal hypothetical situation), leaving him with routing the tickets from the first line service desk to the senior sys admins. Anything that can’t be handled by the service desk is either automated or too complex for Edd to handle on his own.

The Aftermath

Luckily for Edd, the seniors involve him in the automation process and the few incidents that still pop up in the IT operation, but HR will have to find a cost-efficient way to get the next sys admin up and running without having to hire a senior right off the bat.

So what are the options?

At first glance, the most logical option would be to train juniors on the job, but this is getting less and less feasible as there is little work left for juniors. Instead, training needs to be done intensively, via longer courses provided by the employer or an external company. The former would require company trainers, but could end up more tailored to the company's specific characteristics. In contrast, the latter could be either cheap and generic or expensive and tailored.

The challenge is that you need some form of actual experience with a system before you can automate it. For stand-alone automated systems like an AD with Exchange or VM creation, setting up a course with enough hands-on experience is very doable. But with complete Identity Management automation, automated monthly billing reports or any product that incorporates multiple systems, it’s the intricacies that make the 3 or 5-day courses that we typically see in the IT world almost impossible.

The alternative, to only hire seniors, would probably be too expensive and would diminish the recruiting pond over the years. In the end, training your juniors while under contract, though challenging in various ways, still seems like the best option.

The Conclusion

Train your juniors to know your products and to quickly and correctly figure out how these products should connect. Give them test environments to create things manually and break things while trying to automate it. Give them a tutor and time-managed (!) play time. And most importantly, involve them in breakages and incidents arising from integrations between systems. Yes, your seniors can do it faster, but invest in your Edds so that they can become your future seniors.

Koen Halfwerk certified CTT+

Category : Training

As of this week, our consultant Koen Halfwerk is an official CompTIA Classroom Trainer (CTT+)! His practical exam followed the theory exam he already passed in December.

Koen is hereby the second certified trainer within Methos, which means two-thirds of our people are qualified to teach! For now, Koen will focus on the PowerShell Basics courses in the middle and south of the country. Jeff is active throughout the country and also teaches PowerShell Advanced courses.

Contact us for more information about our trainings or our certified people.

Koen and Jeff to provide a very hands-on & practical PowerShell training

Category : Uncategorized

Together with our education partner Vijfhart, we're proud to announce that our colleagues Koen and Jeff will provide a three-day hands-on and practical PowerShell training. This training is based on the 'Learn PowerShell in a Month of Lunches' book and is intended for anyone willing to learn the language.


So, want to learn how to automate the boring stuff and elevate your skills (and possibly your career) to the next level? This training will definitely help you along that path.


This training is NOT based on the Microsoft Official Curriculum, but on the ‘Learn PowerShell in a Month of Lunches’-book. The exercises during the training are based on Koen’s and Jeff’s experiences in the field over the last 10 years. Practical tasks and examples that will give you the knowledge needed to apply PowerShell in your daily work the moment you’ve finished the training. So, awesome for Ops-people, admins and engineers!

Rex to provide a Chocolatey workshop

Category : Chocolatey , Training

On the 18th of April, our colleague Rex will provide a Chocolatey workshop at Startel, one of our Education partners.

Yes, this workshop will be held in Dutch.

After completing this workshop, students will be introduced to:

* Simple and advanced scenarios for Chocolatey. You will see that Chocolatey can manage anything software-related when it comes to Windows.
* General Chocolatey use.
* General packaging.
* Customizing package behavior at runtime (package parameters).
* Extension packages.
* Custom packaging templates.
* Setting up an internal Chocolatey Server repository.
* Adding and using internal repositories.
* Reporting.
* Advanced packaging techniques when installers are not friendly to automation.

Rex de Koning to join Methos

Category : Uncategorized

As of the 1st of March Rex de Koning will be joining Methos in the role of Azure Engineer.

We are very happy for him to join as he greatly expands our resources, knowledge and experience.

Rex, welcome to the club!

Talk to Skype for Business

Ever wondered if you could get Skype for Business to respond to Powershell? This post will give you the basics for a script that writes to Skype for Business contacts on your behalf. I started writing code for Lync 2010 and have since then migrated to Skype for Business without any backwards compatibility issues.

While it is possible to have a script running that sets your status to 'Do Not Disturb' every time a certain person (your manager) tries to talk to you, this would result in a DoS attack on your Skype for Business (SfB) server. There is a way to go to 'Do Not Disturb' after the first message has been received, which I will shortly explain at the end of this post. In order to make this post as readable as possible, I'll write the code sequentially, without any functions. This way the code reads in the same order as PowerShell executes it.

But first some things you need in order to get SfB to acknowledge the existence of PowerShell in the first place. Sadly, SfB does not have its own Windows SDK, but the Lync SDK works just as well. Inside are a couple of DLLs that we will need. So download the SDK and open it with any archive program like 7-Zip. It contains the following DLLs that we will need:

  • Microsoft.Lync.Model.dll
  • Microsoft.Lync.Controls.Framework.dll
  • Microsoft.Office.Uc.dll
  • Microsoft.Lync.Utilities.dll

Place the DLL files in the same folder as your script or working directory and import them:

Import-Module -Name ".\Microsoft.Lync.Model.dll"
Import-Module -Name ".\Microsoft.Lync.Controls.Framework.dll"
Import-Module -Name ".\Microsoft.Office.Uc.dll"
Import-Module -Name ".\Microsoft.Lync.Utilities.dll"

Next we will make a connection to your local Skype for Business client.

#Try to connect using a bit of dotnet
Try {
    $Client = [Microsoft.Lync.Model.LyncClient]::GetClient()
} Catch {
    Write-Output "Skype for Business client not running"
}
#check if the client is actually signed in, not just running
If ($Client.State -ne "SignedIn") {
    Write-Output "Skype for Business client not signed in"
}

Next, grab the client status, the contact manager and the status of your contacts:

#get the client's status
$Status_Info = New-Object 'System.Collections.Generic.Dictionary[Microsoft.Lync.Model.PublishableContactInformationType, object]'
#get all contacts
$contact_man = $Client.Self.Contact.ContactManager
#get availability status of all contacts
$contact_stat = $contact_man.GetContactInformation("Availability")

Changing your status

In order to change your Skype for Business status, you have to know what number in $Status_info corresponds to what status (instead of numbers, you could use strings in the switch):

1 {$status = 3000} # "Available"
2 {$status = 6000} # "Busy"
3 {$status = 9000} # "Do Not Disturb"
4 {$status = 12000} # "Be Right Back"
5 {$status = 15000} # "Away"
6 {$status = 15500} # "Off Work"
7 {$status = 18000} # "Appear Offline"
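As a hedged sketch, the mapping above becomes a complete, runnable switch when wrapped in a small helper. The function name is mine, not part of the SDK, and it runs without the SfB client:

```powershell
# Hypothetical helper: map a menu choice (1-7) to the SfB availability code.
function Get-AvailabilityCode {
    param([int]$Choice)
    switch ($Choice) {
        1 { 3000 }  # Available
        2 { 6000 }  # Busy
        3 { 9000 }  # Do Not Disturb
        4 { 12000 } # Be Right Back
        5 { 15000 } # Away
        6 { 15500 } # Off Work
        7 { 18000 } # Appear Offline
    }
}
Get-AvailabilityCode -Choice 3   # 9000
```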

So if you wanted to have a script that changes your Skype for Business status between busy and available, you could do something like this (without a switch in this case):

if ($Client.Self.Contact.GetContactInformation("Availability") -eq 6500) { #for some reason $client always reports 500 more than it wants to receive
    $status = 3000 #available
} elseif ($Client.Self.Contact.GetContactInformation("Availability") -eq 3500) {
    $status = 6000 #busy
}
#publish the change to the client
$Status_Info.Add([Microsoft.Lync.Model.PublishableContactInformationType]::Availability, $status)
$Publish = $Client.Self.BeginPublishContactInformation($Status_Info, $null, $null)

Sending a message

So how about sending a message? Given some prior knowledge about your contact list, or the SIP address of your contact, you can write a script to talk to any contact or person that you would normally be able to talk to through the GUI.

Finding the address or name of the SfB contact:

#The name of a group:
#The displaynames in the group:
#the SIP addresses in the group:
# $contact is now the email address of my first contact (group1, contact 1)
$Contact = $Client.ContactManager.Groups[0].GetContactInformation(11)[0]
#what do you want to say:
$message = "Hello World"
#start the conversation:
$Conversation = $Client.ConversationManager.AddConversation()
#add your contact to it
$Conversation.AddParticipant($Contact)
#make the message
$Msg = New-Object "System.Collections.Generic.Dictionary[Microsoft.Lync.Model.Conversation.InstantMessageContentType,String]"
$Msg.Add([Microsoft.Lync.Model.Conversation.InstantMessageContentType]::PlainText, $message)
#set the modality to instant message (rather than audio or any other modality)
$Modality = $Conversation.Modalities[1]
#send the message:
$Modality.BeginSendMessage($Msg, $null, $Msg)

Now that you have a basic handle on how to talk to Skype via Powershell, go ahead and explore all the possibilities!

The new Az module – Connecting to Azure

With the introduction of the new Az PowerShell module, the merger of the Azure.* and AzureRM.* modules, comes a new way of connecting to Azure.

When we get a list of the available commands to do something with an AzAccount, you’ll end up with the following:

As you can see, there are now Connect-/Disconnect-AzAccount and Login-/Logout-AzAccount cmdlets. So if you want to connect to Azure and use PowerShell cmdlets to manage your environment, which one do you use?

If you use either Connect-AzAccount or Login-AzAccount, you’ll end up with the following message:

For this, one would require user interaction. Would that not negate the whole concept of automation?

One of our customers asked me whether we could automate this. His idea was that we would write something that could read the URL and code, utilize a browser, and through that automate the login.

Although I love billing customers, I don’t like to bill them unnecessarily. I decided to educate them instead:

The solution is already available

The solution is simply to use the cmdlet the way it is intended to be used. In an interactive environment, you can simply go to that website and fill in the code. When you need the cmdlet in an automated process or script, you can use the cmdlet's parameters to tweak its behavior so that it works in automation.

If you look at help of the cmdlets, you’ll notice that it has quite a few parameters that you can use. Amongst those is the -Credential parameter:

Big fat note:
This approach doesn’t work with Microsoft accounts or accounts that have two-factor authentication enabled.

But what if you’re using an account with Multi Factor Authentication?

Well, let me introduce you to Service Principals and Managed Identities.
Service principals are non-interactive Azure accounts. Like other user accounts, their permissions are managed with Azure Active Directory. By granting a service principal only the permissions it needs, your automation scripts stay secure.

If you want to know how you can create Azure Service Principals, take a look here.

Next to the Service Principal itself, the Connect-AzAccount cmdlet also requires you to provide its application ID, sign-in credentials, and the tenant ID associated with it:
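A minimal sketch of wiring those three pieces together. The GUIDs and secret below are dummies, and only the final, commented-out line actually talks to Azure (it requires the Az module and a real principal):

```powershell
# Hedged sketch: the pieces a service-principal login needs. All IDs are dummies.
$appId    = '00000000-0000-0000-0000-000000000000'  # application (client) ID
$tenantId = '11111111-1111-1111-1111-111111111111'  # tenant ID
$secret   = ConvertTo-SecureString -String 'sp-secret' -AsPlainText -Force
# the application ID acts as the user name, the client secret as the password
$cred     = New-Object System.Management.Automation.PSCredential($appId, $secret)

$connectParams = @{
    ServicePrincipal = $true
    Credential       = $cred
    Tenant           = $tenantId
}
# Connect-AzAccount @connectParams   # requires the Az module and a real principal
```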

Managed identities are a subset of Service Principals and therefore have the same constraints.
They are assigned to resources that run in Azure. You can use them to sign in and acquire an app-only access token to access other resources. Managed identities are only available on resources running in an Azure cloud.

Working around Azure Tagging Limits – Using JSON formats.

Have you ever run into the hard limit in Azure for the number of tags allowed on a single resource, or even a resource group?
When you work in a large organisation that wants to track everything, this might be one of the things happening to you.

Let’s dig a bit into the actual tagging limits currently in Azure.
source: https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits

  • Each resource or resource group can have a maximum of 15 tag name/value pairs
  • The tag name is limited to 512 characters
  • The tag value is limited to 256 characters.
  • For storage accounts, the tag name is limited to 128 characters, and the tag value is limited to 256 characters.
  • Tags can’t be applied to classic resources such as Cloud Services.
  • Tag names can’t contain these characters: <, >, %, &, \, ?, /

So this means we can have a maximum of 15 tag name/value pairs on a resource or resource group.
If you want to tag, for example: Owner, Team, Manager, CostCenter, Environment, BackupType, ExpirationDate, MaintenanceWindow, etc., you will run out of tags pretty quickly.

Luckily, looking at the rest of the limitations: the tag value is limited to 256 characters (that is a lot of characters!), and only a small set of characters is forbidden in tag names.

Since we don’t spot curly braces in the ‘cannot contain’ list, why not start using JSON as tag values to concatenate tags?

{
    "Team": "Solution Architects",
    "BackupType": "FullBackup",
    "Manager": "Danny den Braver",
    "ExpirationDate": "None",
    "MaintenanceWindow": {
        "Days": [
            "Saturday",
            "Sunday"
        ],
        "Hours": "12:00-20:00"
    },
    "Environment": "Development",
    "Owner": "Danny den Braver",
    "CostCenter": "12345"
}

Now let’s put this into practice and see how we could do this leveraging PowerShell (specifically as I like splatting more than writing native JSON).

Let’s first build our hashtable and convert it to JSON:

$ServerDetails = @{
    Owner             = 'Danny den Braver'
    Team              = 'Solution Architects'
    Manager           = 'Danny den Braver'
    CostCenter        = '12345'
    Environment       = 'Development'
    BackupType        = 'FullBackup'
    ExpirationDate    = 'None'
    MaintenanceWindow = @{
        Days  = 'Saturday', 'Sunday'
        Hours = '12:00-20:00'
    }
}

$ServerDetailsJSON = $ServerDetails | ConvertTo-Json

Now we can add it as a tag value to our environment:

$Tags = @{
    'ServerDetails' = $ServerDetailsJSON
}

Set-AzureRmResourceGroup -Name db-personal-rg-01 -Tag $Tags
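Reading such a tag back is the reverse operation: pull the value and run it through ConvertFrom-Json. As a minimal offline sketch, the here-string below stands in for the value you would get from (Get-AzureRmResourceGroup -Name db-personal-rg-01).Tags['ServerDetails']:

```powershell
# The here-string stands in for the tag value read back from Azure.
$ServerDetailsJSON = @'
{
    "Owner": "Danny den Braver",
    "CostCenter": "12345",
    "MaintenanceWindow": {
        "Days": ["Saturday", "Sunday"],
        "Hours": "12:00-20:00"
    }
}
'@
$details = $ServerDetailsJSON | ConvertFrom-Json
# nested properties come back as objects, so dotted access just works
$details.MaintenanceWindow.Hours
```

This is what makes the JSON approach practical: scripts consuming the tags get structured data back instead of having to parse a flat string.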

This is what it will look like inside the portal:

This is what it will look like from PowerShell:

Hopefully this will give you enough room for moving ahead using tags within Azure.


Category : Uncategorized

On 6 October 2018, Methos enjoyed a fun company outing. The three employees, and their partners, only just managed to escape from the Mission Impossible escape room in Scheveningen. With 1.5 minutes to spare, the world was saved, and this time not by automation!

Afterwards, our world-savers enjoyed a well-deserved evening of Tasty Comedy on the pier. Danny, thanks for organizing!