Azure Point-To-Site VPN – certificates
Let me set the expectation here: I am not going in depth on how to set up an Azure VPN, as many articles already take you step by step through configuring a VPN tunnel to the Azure cloud.
My main concern here is the methods available for generating the certificates used to establish this type of VPN. I have used a self-signed certificate, which works well in most instances, but it could always be replaced by a publicly signed certificate to avoid uploading various trusted root certificates to the Azure vNet.
The most common way is to use makecert.exe, which comes as part of the Windows SDK.
Open a command prompt:
makecert.exe -sky exchange -r -n "CN=RootCertName" -pe -a sha1 -len 2048 -ss My
makecert.exe -n "CN=ClientCertName" -pe -sky exchange -m 96 -ss My -in "RootCertName" -is My -a sha1
With the introduction of PowerShell 4 in Windows 8.1 and Windows Server 2012 R2, we can now generate the self-signed certificate with a single command, without installing the Windows SDK and makecert.exe.
Using PowerShell, run the following line:
New-SelfSignedCertificate -CertStoreLocation cert:\LocalMachine\My -DnsName CertName -KeyLength 2048 -KeySpec KeyExchange
You can then export the certificate as a .cer file, place it in your Trusted Root Certification Authorities store and upload it to Azure.
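As a minimal sketch of that export, assuming the certificate was created with -DnsName CertName above and a hypothetical C:\Temp output path:

# Find the self-signed certificate in the local machine store
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq 'CN=CertName' } | Select-Object -First 1
# Export the public portion as a DER-encoded .cer file
Export-Certificate -Cert $cert -FilePath 'C:\Temp\CertName.cer'
# If a Base64-encoded file is required, certutil can convert the DER output
certutil -encode 'C:\Temp\CertName.cer' 'C:\Temp\CertName-base64.cer'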
Both processes work, but you will need one of the OSes highlighted above in order to use the PowerShell command; you can install the Windows Management Framework on older versions of Windows, but that command won’t be available to you.
Azure resource re-allocation and Resource Groups
Infrastructure in the cloud (IaaS) is such an evolving topic from the architectural point of view. As services evolve and more functionality gets added to enable the end user to utilise these services in the best form, complexity starts to creep in.
IaaS requires a lot of initial planning to minimise any downtime needed to re-allocate services/servers in production (Prod).
If your move to Azure services started out as a proof of concept (PoC) and suddenly became the business-critical service your business can’t function without – without the necessary transitional planning – then we are on the same page here.
Microsoft Azure does add a lot of value to a business and to the continuity of its operations.
In this article I will go over the different Azure resources and the way they can be organised for ease of management and billing. Billing is an important topic if you want to understand how your services are being utilised in the cloud, or if you need to bill each business unit because your business uses a charge-back model.
If you have just started building your infrastructure on Azure, ensure your business units use Azure Resource Groups to group their services/servers; that could save you a lot of time in the long run.
Moving resources between different resource groups is a complex, PowerShell-driven process. First you need to understand the limitations of a resource move:
- vNets can’t be moved.
- Re-allocated Azure resources will retain their source region, even if your destination resource group is in a different region.
- You can’t move a single VM attached to a cloud service; the cloud service and all VMs attached to it have to move together.
- From experience, move storage accounts separately. When I tried to move a storage account with the rest of the resources, I got the error “One resource move request can contain resources of only 1 provider.”
- If you would like to migrate a VM to a new vNet, then the VM needs to be deleted and reprovisioned on the new vNet – the VM will be down for that duration.
- If you would like to move a VM to a new storage account, then the downtime will be much greater, depending on how big the VHD files are and the region. I won’t talk much about this process; you will find it detailed here.
Now for the interesting part: the move and re-allocation process.
- Download the latest Azure PowerShell module (we will be using the latest Azure Resource Management module) as illustrated here.
- Log in to your subscription using Login-AzureRmAccount.
- Get the contents of your source resource group on Azure: Get-AzureRmResource.
- Feed the output to Move-AzureRmResource.
I have written a short script to demonstrate this process (MS Azure Resource Group Management); I have added the necessary comments to each step in the script, so you should be able to customise it to your needs.
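If you just want the shape of that process, here is a rough sketch (not the full script), assuming the hypothetical resource group names SourceRG and DestRG:

# Log in to the subscription
Login-AzureRmAccount
# Collect the resources in the source group, leaving storage accounts out
# (they move separately, as noted above)
$resources = Get-AzureRmResource | Where-Object { $_.ResourceGroupName -eq 'SourceRG' -and $_.ResourceType -notlike 'Microsoft.Storage*' }
# Issue one move request per resource provider, to avoid the 'only 1 provider' error
$resources | Group-Object { $_.ResourceType.Split('/')[0] } | ForEach-Object {
    Move-AzureRmResource -DestinationResourceGroupName 'DestRG' -ResourceId $_.Group.ResourceId -Force
}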
DB fails to activate on another node in an Exchange 2013 DAG
Posted by Sam in Exchange 2013 on 29 January, 2015
It has been a while since I blogged about Exchange! Last year, actually! Time flies…
Anyways, I wanted to talk about a problem you might face (I certainly have recently) when your DAG members are online but a database fails to activate on a particular node! Back in the day that used to happen if one of the vital Exchange services had stopped, but in this scenario all services were running as normal.
I wanted to redistribute DBs between all nodes after a restart, based on the activation preference of each DB. MS has kindly written a beautiful script that can take care of that for you for a specific DAG: RedistributeActiveDatabases.ps1, located in the ‘Scripts’ folder under the Exchange install directory. This script takes your DAG, assesses the DB distribution and, based on the activation preferences, starts to move the active DBs to their intended servers.
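A hedged example of invoking it, assuming a default install path and a placeholder DAG name of DAG1:

# Run from the Exchange Scripts folder on one of the DAG members
cd "$env:ExchangeInstallPath\Scripts"
.\RedistributeActiveDatabases.ps1 -DagName DAG1 -BalanceDbsByActivationPreference -Confirm:$false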
In my case it failed to move due to an error on the server regarding the ‘HighAvailability’ state. Exchange 2013 has introduced the new concept of server component states, which gives granular control over the server components that make up the Exchange server.
Running Get-ServerComponentState -Identity ServerName on an Exchange server shows each component and its state; this is very useful for troubleshooting problems with Exchange before even digging deep into the configuration.
In order to bring server components online you could run the following PowerShell command:
Set-ServerComponentState -Identity ServerName -Component ComponentName -State StateName -Requester RequesterName
Note: if a component was taken offline by multiple requesters, you need to issue the ‘Active’ state under each of those requesters in order for the component to turn active.
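For example, to bring the HighAvailability component back to Active on a hypothetical server EX01 that had been set Inactive by both the Maintenance and Functional requesters:

# Each requester that set the component Inactive must set it Active again
Set-ServerComponentState -Identity EX01 -Component HighAvailability -State Active -Requester Maintenance
Set-ServerComponentState -Identity EX01 -Component HighAvailability -State Active -Requester Functional
# Verify the result
Get-ServerComponentState -Identity EX01 -Component HighAvailability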
There is a great article written by the Exchange team which goes into great depth explaining the principle behind it and the advantages it gives the administrator.
InTune or not to InTune … is it a Question?
Posted by Sam in InTune, MDM, SCCM 2012R2 on 23 September, 2014
I am currently working with a client, designing a solution for MDM (Mobile Device Management). Most customers look for an easy-to-use solution that can be picked up and managed appropriately by their internal IT staff.
There are many solutions on the market – AirWatch, Good and InTune, plus many more I haven’t mentioned – and each has its advantages and disadvantages. Anyways, I am not writing a product feature review, so I won’t dive into a comparison of the vendors.
For this customer we settled on InTune due to cost and integration with existing systems like Microsoft SCCM 2012 R2.
InTune does provide a good MDM solution in the cloud for those who want to migrate away from their on-premises private cloud or create a hybrid cloud. Either way, it’s a good step into the cloud which opens up more possibilities inside MS Office 365 hosting.
If you have implemented MS SCCM 2012 R2 on-premises, it is recommended to integrate SCCM with InTune to manage your mobile devices. Combined, they provide a very powerful solution for managing settings on the phone down to the application level.
Microsoft has a very good article on application control using SCCM and InTune http://technet.microsoft.com/en-us/library/dn771706.aspx
If you have InTune and want to integrate SCCM into your solution, it’s achievable even if you have already switched on InTune as your MDM Management Authority. A call to Microsoft support can start that transition; be aware that the process is disruptive and will impact all phones enrolled in InTune while it runs. Having SCCM as the MDM Management Authority is a one-way road, so you won’t be able to flip back to having InTune as your MDM Management Authority.
Report on Exchange 2010 Server RU level
I found a nice script written to gather the RU (Update Rollup) level across your Exchange environment.
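If you just need a quick check without the script, one rough approach on Exchange 2010 is to look at the file version of ExSetup.exe, which (unlike AdminDisplayVersion) changes with each Update Rollup; run this locally on each server:

# The build number in FileVersion maps to the installed Update Rollup
Get-Command ExSetup | ForEach-Object { $_.FileVersionInfo }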
Couldn’t open backup file handle while performing Exchange DB seed via Powershell
Posted by Sam in DAG, Exchange 2010, Powershell on 4 November, 2013
The error above is the outcome of running Update-MailboxDatabaseCopy -Identity DB_NAME -DeleteExistingFiles, after which the DB status went to Failed and Suspended. The reason is that a backup had kicked off and the handle for the DB was no longer available.
You could check the status of the backup on the DB by running Get-MailboxDatabaseCopyStatus -Identity DB_NAME | fl *backup*
There are two ways to get over this: either wait for the backup to finish, or stop the backup (a reboot of the server will do it). That should fix the issue; the seeding process then needs to start from scratch.
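A hedged sketch of the recovery, assuming a copy named DB_NAME\MBX2 (placeholder names):

# Confirm no backup is still in progress against the DB
Get-MailboxDatabaseCopyStatus -Identity DB_NAME | fl *backup*
# Then kick off the reseed from scratch
Update-MailboxDatabaseCopy -Identity 'DB_NAME\MBX2' -DeleteExistingFiles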
Network has no associated network protocol profile
Cannot initialize property ‘vami.netmask0.vSphere_Management_Assistant_(vMA)’ Network has no associated network protocol.
This is the message we received while powering up our vMA template. The reason for this is that there isn’t an IP pool defined for the network the vMA appliance is plugged into.
The solution is simple: click on your cluster node within the vSphere client, click on the IP Pools tab and create an IP pool, associating it with your physical NIC. This assigns an identity to your network adapter.
After creating the IP pool, you should be able to power on your appliance.
Move a database between two DAGs
Posted by Sam in DAG, Exchange 2010 on 15 October, 2013
Ever wondered how to move a whole database from one DAG to another without going through mailbox-by-mailbox migrations? For what reason, you might ask?
We had to split two remote sites into two separate DAGs to limit the DAG’s dependency on the link between them, due to some reliability issues. This has caused us problems in the past and lots of headaches keeping users happy.
The original setup consisted of two multi-role Exchange 2010 servers in a stretched DAG, with the File Share Witness (FSW) in Site A. If the link goes down or becomes intermittent, all DBs fail over to Site A and Site B is left with no or very limited access to its mailboxes (see figure below).
What we needed to achieve was to have both sites working even during link failures, which isn’t possible with one DAG – hence the proposal of two DAGs. Using the current design above, we managed to split users into their own DBs according to site and activated those DBs on their respective sites.
In order to create the second DAG and migrate DBs across to it, we followed these steps (a PowerShell sketch of the shell equivalents follows the list):
1. Make sure all DBs are in sync (healthy).
2. Ensure all the DBs required for the second DAG are activated on the swing server (in our diagram, MBX2).
3. Remove any DB copies that tie that server to MBX1 before attempting to evict it from DAG1.
4. ENSURE YOU ARE REPLICATING AD CHANGES AT EACH STEP! OTHERWISE YOU COULD END UP WITH A SPLIT-BRAIN ISSUE AND CAUSE YOUR DB TO FAIL. LISTEN TO ME WHEN I SAY I AM TELLING YOU FROM ‘EXPERIENCE‘ :)
5. Evict MBX2 from DAG1 (under Organization Configuration – Mailbox – Database Availability Group, right-click on the DAG you want to evict MBX2 from and click on Manage Database Availability Group Membership).
6. Select MBX2 (in our case) and click on X to remove it.
7. Now right-click on the DAG that you want to join and follow the same steps as above to add MBX2 to the DAG.
8. Set up your DB copies.
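The GUI steps above map onto a handful of shell commands. A hedged sketch, using the placeholder names from the diagram (DB2 being a database destined for the new DAG, MBX3 a hypothetical additional member of DAG2):

# Step 3: remove the copy on MBX1 of a DB that is moving to the new DAG
Remove-MailboxDatabaseCopy -Identity 'DB2\MBX1' -Confirm:$false
# Steps 5-6: evict MBX2 from the old DAG
Remove-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2
# Step 7: join MBX2 to the new DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG2 -MailboxServer MBX2
# Step 8: set up DB copies within the new DAG as required
Add-MailboxDatabaseCopy -Identity DB2 -MailboxServer MBX3 -ActivationPreference 2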
This process should be seamless to end users, with no interruption to service; just make sure the AD topology is updated at each step to avoid any DB downtime.
I hope this article helps you save some time and effort with DB re-allocation, rather than mailbox-by-mailbox migrations. Of course, mailbox-by-mailbox migration has its benefits: if you have lots of white space you want to recover, it beats dismounting the DB and running an offline defrag, which is time-consuming and requires downtime depending on the size of the database.
Soft Deleted Mailbox in Exchange 2010 (Continued)
Posted by Sam in Exchange 2010, Powershell on 12 July, 2013
In my previous post I talked about what happens behind the scenes when you disconnect a mailbox via the EMC. Soft-deleted mailboxes stay on the system for the whole retention period (by default 30 days), so they still utilise space within the Exchange DB; if you are running low on disk space, this might become an issue.
Luckily there is a solution to this issue: the Remove-StoreMailbox command in PowerShell. Follow the solution in this MS article http://technet.microsoft.com/en-us/library/gg181092(v=exchg.141).aspx
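A hedged example along the lines of that article, with DB1 as a placeholder database name (note the purge is unrecoverable):

# Permanently purge all soft-deleted mailboxes from DB1
Get-MailboxStatistics -Database DB1 | Where-Object { $_.DisconnectReason -eq 'SoftDeleted' } | ForEach-Object {
    Remove-StoreMailbox -Database $_.Database -Identity $_.MailboxGuid -MailboxState SoftDeleted
}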
Soft Deleted Mailbox in Exchange 2010
Posted by Sam in Exchange 2010, Powershell on 17 June, 2013
Have you ever wondered why there are so many disconnected mailboxes? That raised a big security alert in our firm, especially after finding out how IT staff used to disable user accounts! By disabling a mailbox, you are actually detaching that mailbox from its AD object; this orphaned mailbox is then prone to deletion according to your Exchange mailbox retention policy (by default 30 days!).
We also found a few other mailboxes belonging to active users sitting under Disconnected Mailbox, which we discovered by running the command:
Get-MailboxStatistics -Server ServerName | where {$_.DisconnectReason -eq 'SoftDeleted'}
The result shows the display names of users whose mailboxes had been moved from one DB to another: Exchange marks the source mailbox as SoftDeleted rather than the default Disabled. A mailbox gets flagged as Disabled when you disable it using the Disable command within the MS Exchange GUI, or Disable-Mailbox via Powershell.


