In part 1 of this article I went through creating Azure Application Gateways (AGW) using PowerShell, which is a powerful way of deploying resources on Azure: using functions and loops you can build a complex solution in a few lines. Unlike PowerShell, JSON is a static solution. It gives you a way of creating a baseline deployment, but I still haven't found a way of controlling sub-resources that aren't governed by the JSON copy() function.
In this article I will go over the same deployment using a JSON template; this serves only as a reference point for your own deployment. I will use the same Azure AGW configuration from part 1 to keep both deployments consistent.
When building an Azure AGW, the same prerequisites apply to the JSON method. There is one subtle difference, which I will cover when I get to that point.
So this time I have prepared a Visio diagram to simplify our deployment and understand what we are trying to achieve – see below.
The diagram shows a web server hosting four websites running a combination of HTTP and HTTPS, with either Basic authentication or Anonymous access.
The only thing different in comparison to previous Powershell deployment is the steps we take to process SSL certificates and passing them over to the JSON template.
Azure Application Gateway is a layer 7 reverse proxy service offered as PaaS to the general public. It supports SSL offloading, which means you can terminate your SSL connection at the Application Gateway and connect to the backend server over plain HTTP, or initiate a new SSL connection to your backend service.
This is all well and good, simple and painless, if you have a single backend server with a single website. The complexity of the solution increases as the backend starts leveraging more IIS functionality such as Windows/NTLM authentication, SNI and host headers, or a different SSL certificate for each sub-site (if you have multiple sites running on the same IIS server).
Before you even start designing your Azure Application Gateway, there are a few guidelines you will need to follow:
- You should have an empty default site.
- If using both HTTP/HTTPS protocols on any of the sub-sites, the default website should be listening on both 80 and 443.
- In the case of HTTPS the default site will need to be loaded with a single SSL certificate that will primarily be used by the Application Gateway to authenticate against the server.
- Do not run SNI on the default website.
- If you are running NTLM or Windows authentication on any of the sites (except form-based authentication), you will need a site/page that allows anonymous authentication, to be used by the Application Gateway custom probe.
- Use IP address for the backend pool rather than FQDN.
The above will save you a lot of hassle while implementing and configuring your Application Gateway to work with your backend web server.
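To illustrate the last guideline, a custom probe pointing at an anonymous page can be defined with the AzureRM PowerShell cmdlets. This is a minimal sketch; the hostname, path and timings are placeholder values you would replace with your own:

```powershell
# Sketch: define a custom probe that targets a page allowing anonymous access.
# 'web01.contoso.local' and '/health/anon.html' are illustrative values.
$probe = New-AzureRmApplicationGatewayProbeConfig `
    -Name "anonProbe" `
    -Protocol Http `
    -HostName "web01.contoso.local" `
    -Path "/health/anon.html" `
    -Interval 30 `
    -Timeout 30 `
    -UnhealthyThreshold 3
```

The probe is then referenced from the backend HTTP settings, so the gateway health-checks the anonymous page rather than an authenticated site.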
Microsoft has recently fixed a few issues we were experiencing with Application Gateways around SSL and custom probes.
There are two ways to deploy an Application Gateway: PowerShell or a JSON template. The latter is preferable to ensure consistency at each deployment. This article is in two parts; in this part I will be using PowerShell to deploy an Application Gateway.
- SSL private key in PFX format for all sites using SSL
- SSL public key in CER format for default site
- IP address of the backend web server
- Frontend and backend listening ports
- Site/page with anonymous access if requiring authentication
The PowerShell code below deploys an Application Gateway listening on two ports (80 and 443). The backend consists of four sites with SNI and host headers enabled. Two sites run on port 80, one of which requires Basic authentication. The other two sites run on port 443, bound with a self-signed SSL certificate for testing, and one of them has Basic authentication turned on. This covers the four common scenarios of a typical deployment.
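To give a flavour of what the full script involves, an HTTPS listener bound to a host header looks something like the following. This is a fragment only; $fipConfig, $frontendPort443 and $sslCert are assumed to have been created earlier with the corresponding New-AzureRmApplicationGateway* cmdlets, and the hostname is a placeholder:

```powershell
# Sketch: an HTTPS listener bound to a host header (SNI enabled).
# $fipConfig, $frontendPort443 and $sslCert are assumed to exist already.
$listener = New-AzureRmApplicationGatewayHttpListener `
    -Name "site1HttpsListener" `
    -Protocol Https `
    -FrontendIPConfiguration $fipConfig `
    -FrontendPort $frontendPort443 `
    -SslCertificate $sslCert `
    -HostName "site1.contoso.local" `
    -RequireServerNameIndication true
```

One listener of this shape is created per site; the HTTP sites use the same cmdlet with -Protocol Http and no certificate.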
To be continued …..
Automation has become a large part of any Ops team work stream. It reduces repetitive work and introduces a clear and effective method of ensuring consistency across your platforms, server estate and businesses.
In this article I will go through Azure Automation, specifically automating VM creation and domain join. This lifts some of the load off IT having to provision and domain-join those VMs manually, and allows developers to take a much more agile approach.
The automation process is based on an Azure Runbook, utilising a PowerShell workflow and Runbook assets to fully automate the whole process.
You should also have a startup image (i.e. a golden image), sysprep'ed and uploaded to a known and accessible storage account in that subscription. You need to ensure that the image was generalised and shut down. This type of deployment ensures all your applications and settings are packaged as part of the image – that is, if you don't want to use Microsoft's own images. Make sure you are covered by MSDN licences if you are uploading client images, and ensure your server image is appropriately licensed as well, plus any applications you package as part of that image.
You are probably keen to get down to business, but there are a few things you have to make sure of for this whole process to work for you. This automation task is based on creating an Azure ARM VM, and you will need three credential assets created under your Automation account, which will be used by the script:
- Asset for local VM admin
- Asset with Azure co-admin rights – or at least VM creation rights
- Asset with domain admin rights
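The credential assets can be created through the portal or with PowerShell. A minimal sketch follows; the Automation account, resource group and asset names are all placeholders:

```powershell
# Sketch: create the three credential assets under the Automation account.
# 'MyAutomationAcct', 'rg-automation' and the asset names are placeholders.
$localAdmin = Get-Credential -Message "Local VM admin"
New-AzureRmAutomationCredential -AutomationAccountName "MyAutomationAcct" `
    -ResourceGroupName "rg-automation" -Name "LocalAdminCred" -Value $localAdmin

$azureAdmin = Get-Credential -Message "Azure account with VM creation rights"
New-AzureRmAutomationCredential -AutomationAccountName "MyAutomationAcct" `
    -ResourceGroupName "rg-automation" -Name "AzureCred" -Value $azureAdmin

$domainAdmin = Get-Credential -Message "Domain admin"
New-AzureRmAutomationCredential -AutomationAccountName "MyAutomationAcct" `
    -ResourceGroupName "rg-automation" -Name "DomainJoinCred" -Value $domainAdmin
```

Inside the Runbook the assets are then retrieved with Get-AutomationPSCredential using those same names.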
The VHD image of your packaged and generalised VM needs to be uploaded to a storage account. Make a note of that storage account: any VM created from that image will need to reside on the same storage account. If you need to create a VM on a different storage account, you will need to copy the image across first.
The script is split into two parts: the main body, which is the Runbook (a PowerShell workflow), and a client-side script that invokes it.
On Azure, create a Runbook called AzureVMDomainJoin and paste the code below:
Note: Change $vmImageUri string in the above code to reflect your environment.
In order to run this Runbook, ensure it's published on Azure. Then use the script below to start the Runbook from your client machine – you need to have the AzureRm PowerShell module installed to run it.
Make sure you fill out all variables according to your environment. This script will only utilise what’s already created on Azure, like storage accounts, networks and subnets. If you want a new resource group created or a new network/subnet then pre-stage it on Azure first before you run this script.
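The client-side invocation boils down to something like the following. This is a sketch; the account, resource group and parameter names are placeholders that must match whatever parameters your Runbook actually declares:

```powershell
# Sketch: start the published Runbook from a client machine.
# Requires the AzureRm module; all names below are placeholders.
Login-AzureRmAccount

$params = @{
    VMName     = "devvm01"
    VMSize     = "Standard_A2"
    DomainName = "contoso.local"
}

Start-AzureRmAutomationRunbook -AutomationAccountName "MyAutomationAcct" `
    -ResourceGroupName "rg-automation" `
    -Name "AzureVMDomainJoin" `
    -Parameters $params
```

Start-AzureRmAutomationRunbook returns a job object you can poll with Get-AzureRmAutomationJob to track progress.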
Happy scripting ….. 🙂
I have just stumbled across an article regarding moving a Hyper-V cluster with production load into a VMM 2012/2012 R2 environment.
VMM will (as part of adding the new cluster) add an MPIO device list to each of the hosts participating in the cluster, which can cause a loss of connectivity to storage.
Edit: After doing a bit of investigation into MPIO, things to bear in mind:
1. If you are adding an existing Hyper-V cluster to VMM and one or more of your hosts fail to join, drain one of those 'Pending' nodes and work on it separately. This avoids any impact on production load.
2. Remove any MPIO settings under Control Panel > MPIO as shown below (don't reboot).
3. Try adding the Hyper-V host to VMM.
You can query the connected SAN (using mpclaim -e) and display it as vendor/product ID strings, which you can then add via the MPIO GUI.
As you can see there are spaces added to the end of the string to make it compatible. This is necessary, and adding to VMM would fail otherwise.
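For reference, the query and claim commands look something like this. The vendor/product string is illustrative: MPIO hardware IDs pad the vendor field to 8 characters and the product field to 16, which is where the trailing spaces come from:

```
REM Sketch: list storage seen by MPIO, then claim a device without rebooting.
REM "VENDOR  PRODUCT         " is illustrative; vendor is padded to
REM 8 characters and product to 16, hence the trailing spaces.
mpclaim -e
mpclaim -n -i -d "VENDOR  PRODUCT         "
```

The -n switch suppresses the reboot, -i installs/claims the device, and -d takes the padded hardware ID string.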
We have recently started a major project of migrating Hyper-V VMs in bulk from one cluster to another (same processor family) using "Shared Nothing Live Migration". Issues started when two VMs sharing the same .vhdx disk name were moved to the same Hyper-V host; you would receive an error similar to the one below:
There are two ways to resolve this:
- Move to a different CSV location.
- Move to the same location under a new folder structure.
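The second option can be scripted with the Hyper-V module. This is a sketch; the VM name, destination host and path are placeholders:

```powershell
# Sketch: move a VM's storage into a per-VM folder on the destination CSV,
# so two .vhdx files with the same name cannot collide.
# VM name, host name and path are placeholders.
Move-VM -Name "AppVM01" `
    -DestinationHost "HV-HOST02" `
    -IncludeStorage `
    -DestinationStoragePath "C:\ClusterStorage\Volume1\AppVM01"
```

Giving each VM its own folder under the CSV keeps disk names unique per directory even when the .vhdx file names themselves are identical.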
1 parallel subtasks failed during execution.
Unable to connect to the VMM database because of a general database failure.
SQL error code: 547
Ensure that the SQL Server is running and configured correctly, then try the operation again.
We received this error while removing a Hyper-V cluster from VMM 2012 R2. Even reinstalling the VMM agent manually fails. Trying both Remove-SCVMHost and Remove-SCVMHostCluster with -Force also failed miserably!
I have found a good article that takes you through these steps of removing hosts/cluster from the DB end, just make sure you have a backup before you start.
You can find the article here.
Let me set expectations here: I am not going in depth on how to set up an Azure VPN, as it has been covered in many articles that take you step by step through configuring your VPN tunnel to the Azure cloud.
My main concern here is the methods available for generating the certificates used in establishing that type of VPN. I have used a self-signed certificate, which works well in most instances, but it could always be replaced by a publicly signed certificate to avoid uploading various trusted root certificates to your Azure vNet.
The most common way is to use makecert.exe, which comes as part of the Windows SDK.
Open a command prompt:
makecert.exe -sky exchange -r -n "CN=RootCertName" -pe -a sha1 -len 2048 -ss My
makecert.exe -n "CN=ClientCertName" -pe -sky exchange -m 96 -ss my -in "RootCertName" -is my -a sha1
With the introduction of PowerShell 4.0 in Windows 8.1 and Windows Server 2012 R2, we can now generate the self-signed certificate using a simple command, without installing the Windows SDK and makecert.exe.
Using Powershell, run the following line:
New-SelfSignedCertificate -CertStoreLocation cert:\LocalMachine\My -DnsName CertName -KeyLength 2048 -KeySpec KeyExchange
You can then export the .cer certificate which you can place in your Trusted Root Certification Authorities and upload to Azure.
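The export step can also be done from PowerShell. A sketch follows; the subject filter assumes the certificate was created with -DnsName CertName as above, and the output path is a placeholder:

```powershell
# Sketch: export the public key (.cer) of the self-signed certificate.
# The subject filter and output path are placeholders.
$cert = Get-ChildItem cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=CertName" }
Export-Certificate -Cert $cert -FilePath "C:\certs\CertName.cer"
```

Export-Certificate writes only the public portion, which is what you import into Trusted Root Certification Authorities and upload to Azure.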
Both processes work, but you will need one of the operating systems highlighted above in order to use the PowerShell command; you can install the Windows Management Framework, but that command won't be available to you on older versions of Windows.
Infrastructure in the cloud (IaaS) is an evolving topic from the architectural point of view. As services evolve and more functionality is added to enable end users to utilise these services in their best form, complexity starts to creep in.
IaaS requires a lot of initial planning to minimise any downtime needed to re-allocate services/servers in production (Prod).
If your move to Azure started as a proof of concept (PoC) and suddenly turned into a business-critical service that your business can't function without – without the necessary transitional planning – then we are on the same page here.
Microsoft Azure does add a lot of value to the business and continuity of its business operations.
In this article I will go over Azure's different resources and the way they can be organised for ease of management and billing. Billing is an important topic if you want to understand how your services are being utilised in the cloud, or if you need to bill each business unit because your business uses the charge-back model.
If you have just started building your infrastructure on Azure, ensure your business units use Azure Resource Groups to group their services/servers and that could save you a lot of time in the long run.
Moving resources between different resource groups is a complex, PowerShell-driven process. First you need to understand the limitations of a resource move:
- vNets can't be moved.
- Re-allocated Azure resources will retain their source region, even if your destination resource group is in a different region.
- You can't move a single VM attached to a cloud service; the cloud service and all VMs attached to it have to move together.
- From experience, move storage accounts separately. When I tried to move a storage account with the rest of the resources, I got an error ("One resource move request can contain resources of only 1 provider."):
- If you would like to migrate the VM to a new vNet, then the VM needs to be deleted and reprovisioned on the new vNet – the VM will be down for that duration.
- If you would like to move the VM to a new storage account, the downtime will be much greater depending on how big the VHD files are and the region. I won't talk much about this process; you will find it detailed here.
Now we will talk about the interesting part, the move and re-allocation process.
- Download the latest Azure PowerShell module (we will be using the latest Azure Resource Management module), as illustrated here
- Login to your subscription using Login-AzureRmAccount
- Get the content of your source resource group on Azure: Get-AzureRmResource
- Feed the output to Move-AzureRmResource
I have written a short script to demonstrate this process (MS Azure Resource Group Management); I have added the necessary comments to each of the steps in the script, so you should be able to customise it to your needs.
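The core of the steps above boils down to something like this. It is a minimal sketch of the same idea; the resource group names are placeholders, and storage accounts should be moved in a separate request given the provider limitation noted earlier:

```powershell
# Sketch: move everything in one resource group to another.
# 'rg-source' and 'rg-destination' are placeholders; move storage
# accounts in a separate request to avoid the single-provider error.
$resources = Get-AzureRmResource |
    Where-Object { $_.ResourceGroupName -eq "rg-source" }

Move-AzureRmResource -DestinationResourceGroupName "rg-destination" `
    -ResourceId $resources.ResourceId
```

Move-AzureRmResource prompts for confirmation per request, which is a useful safety net when you are reshuffling production resources.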
Anyway, I wanted to talk about a problem you might face (I certainly have faced it recently): your DAG members are online but a database fails to activate on a particular node! Back in the day that used to happen if one of Exchange's vital services had stopped, but in this scenario all services were running as normal.
Based on the activation preference of each DB, I wanted to redistribute the DBs between all nodes after a restart. MS has kindly written a beautiful script that takes care of that for you for a specific DAG: RedistributeActiveDatabases.ps1, which is located under the Exchange install directory inside the 'Scripts' folder. This script takes your DAG and assesses DB distribution; based on their activation preferences it starts moving the active DBs to their intended servers.
In my case it failed to move due to an error on the server regarding the 'HighAvailability' state. Exchange 2013 introduced a new concept of server component state, which gives granular control over the server components that make up the Exchange server.
Running Get-ServerComponentState -Identity ServerName on an Exchange server shows each component and its state; this is very useful for troubleshooting problems with Exchange before even digging deep into configuration.
In order to bring server components online you could run the following PowerShell command:
Set-ServerComponentState -Identity ServerName -Component ComponentName -State StateName -Requester RequesterName
Note: if a component was taken offline by multiple requesters, you will need to issue the 'Active' state under each of those requesters in order for the component to return to active.
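For example, if the HighAvailability component was marked inactive by both the HealthApi and Maintenance requesters, bringing it back looks something like this (the server name "EX01" is a placeholder; HealthApi and Maintenance are two of the valid requester values):

```powershell
# Sketch: the component only returns to Active once every requester
# that marked it Inactive has set it back to Active.
Set-ServerComponentState -Identity "EX01" -Component HighAvailability `
    -State Active -Requester HealthApi
Set-ServerComponentState -Identity "EX01" -Component HighAvailability `
    -State Active -Requester Maintenance

# Verify the component's resulting state:
Get-ServerComponentState -Identity "EX01" -Component HighAvailability
```

If the component still shows Inactive after this, check which other requesters are recorded against it and repeat for each.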
There is a great article written by the Exchange team which goes in great depth explaining the principle behind it and the advantages gained by the administrator.
I am currently working for a client designing a solution for MDM (Mobile Device Management). Most customers look for an easy to use solution so it could be picked up and managed appropriately by their internal IT staff.
There are many solutions on the market, like AirWatch, Good and InTune, plus many more I haven't mentioned; each has its advantages and disadvantages. Anyway, I am not writing a product feature review, so I won't dive into a comparison between the vendors.
For this customer we have settled for InTune due to cost and integration with existing systems like Microsoft SCCM 2012 R2.
InTune provides a good MDM solution in the cloud for those who want to migrate away from their on-premises private cloud or create a hybrid cloud. Either way, it's a good step into the cloud which opens up more possibilities inside MS Office 365 hosting.
If you have implemented MS SCCM 2012 R2 on-premises, it is recommended to integrate SCCM with InTune to manage your mobile devices. Combined, they provide a very powerful solution for managing settings on the phone down to the application level.
Microsoft has a very good article on application control using SCCM and InTune http://technet.microsoft.com/en-us/library/dn771706.aspx
If you have InTune and want to integrate SCCM into your solution, it's achievable even if you have already set InTune as your MDM Management Authority. A call to Microsoft support can start that transition; the process is disruptive and will impact all phones enrolled in InTune during the transition. Having SCCM as the MDM Management Authority is a one-way road, so you won't be able to flip back to InTune as your MDM Management Authority.