Mike Laurencelle

I'm a SharePoint & Server Systems Administrator for Sears Home Improvement Products, headquartered in sunny Longwood, Florida. My primary functions revolve around SharePoint and Virtualization technologies.

I've been in the IT industry now for about 18 years. For me, IT is more than a job to make a living, more than a career to call my own. It's my passion. I am a self-proclaimed geek and have an interest in all things technology. I can't imagine being in any other field - I absolutely love what I do.

Making sure Virtual Machines have time to save before the host shuts down

Posted on : 12:33 AM | By : Mike Laurencelle

By default, Windows is set to wait 20 seconds for a service to gracefully shut down before it is terminated during a host system shutdown. While in most cases this isn't a big deal, it is a bit of an issue when you are running Virtual Server or Hyper-V and need to wait for all the VMs to enter a saved state before the host shuts down.

Typically, when a Virtual Server or Hyper-V host OS shuts down and sends the Stop command to the Virtual Server or Hyper-V services, those services attempt to place all guest VMs into a saved state: each guest is frozen at that moment and its memory is written to disk so it can be rapidly resumed when the host comes back up (think Hibernation). Until this completes, the Virtual Server or Hyper-V services will not enter the Stopped state. However, depending on how much memory is allocated to a guest VM and how many guests are entering a saved state and competing for disk I/O, this process may (and probably will) take more than 20 seconds. For example, writing a single guest's 8 GB of memory to a disk that sustains 100 MB/s takes over 80 seconds by itself. What happens if the guest VMs can't be saved in time? Think of it as "pulling the plug" on the guest VM: it is shut off ungracefully and the contents of memory are lost, much like what happens if you pull the power cord out of the back of a running server.

Thankfully, there is a way to lengthen the time Windows will wait for services to enter the Stopped state before ending them ungracefully. Open trusty RegEdit and navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control, where you will find WaitToKillServiceTimeout set to a default of 20000 milliseconds. Double-click WaitToKillServiceTimeout, change the value to a larger number such as 180000 (3 minutes), then click OK and close RegEdit. This tells Windows to wait 3 minutes for services to enter the Stopped state before ending them ungracefully, which should give your guest VMs plenty of time to enter the Saved state before a host system shutdown.
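
If you'd rather script the change than click through RegEdit, a quick sketch from an elevated PowerShell prompt on the host does the same thing (the value is a REG_SZ expressed in milliseconds):

    # Wait up to 3 minutes (180000 ms) for services to stop before killing them at shutdown
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control' `
        -Name 'WaitToKillServiceTimeout' -Value '180000'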

NOTE: This setting affects ALL services. So, if the Virtual Server or Hyper-V services have finished stopping but another service is hanging, Windows will wait the full time you specified before killing it. Keep this in mind if you wonder why Windows isn't shutting down even though the Virtual Server or Hyper-V services have already stopped.

Creating an NLB Cluster using Hyper-V

Posted on : 2:00 PM | By : Mike Laurencelle

I was trying to create my first Hyper-V VM-based NLB cluster and found it is somewhat different from doing the same on physical hardware or with Virtual Server 2005. So, I thought I'd share my experience and what worked for me in the hopes that it will help someone else avoid the issues I faced.


I was getting very frustrated because I kept creating the cluster only to find that I couldn’t ping the cluster address that resulted and, in many cases, could ping only one of the two NICs that were attached to the VMs. I tried re-creating the cluster, removing and re-adding the Virtual NICs, repairing the NICs, removing all traces of them in the registry and re-adding them – nothing worked.

After continued troubleshooting, I noticed that only one of the two Virtual NICs was showing up with a MAC address assigned. I checked the VM properties to find the appropriate MAC address, assigned it, and then I was able to ping both Virtual NICs. Hurray! Problem solved! Not quite…

As soon as I created the cluster, I noticed I couldn't ping the public Virtual NIC's IP again, or the NLB cluster IP. Banging my head against the desk must have cleared the cobwebs, because it was then that it occurred to me what the problem was. When you create an NLB cluster with two NICs using unicast, the cluster creation process replaces the MAC address of the public NIC with a new MAC address that the whole NLB cluster shares. Because of the way Hyper-V and its virtual switches work, this causes a problem: Hyper-V needs to be made aware of the new MAC address.

Here are the steps that got this working for me:

1. Create your two separate VMs, either on the same host or on separate hosts for additional redundancy. Make sure that both VMs have TWO Virtual Network Adapters.

2. Find the MAC address that Hyper-V has dynamically assigned to each of the two virtual NICs on each VM (you can see these in the VM's settings in Hyper-V Manager).

3. You will find that, in the OS of the VM, one of the two NICs has its MAC address appropriately defined but the other does not. Make sure that both have their MAC address correctly defined:
a. Open the properties for the Network Adapter in the guest VM's OS.
b. Click the Configure button and then click the Advanced tab.
c. Select the network address entry in the property list and set its Value to the MAC address Hyper-V assigned, which you looked up in Step 2.
d. Click OK to save your changes.

4. Create your NLB cluster as usual, using Unicast mode, which will replace the individual MAC addresses of the public NICs on each node with the NLB cluster's shared MAC. During the creation of the NLB cluster, you may receive an error that all properties could not be assigned. This is because the MAC gets changed before the IP address is assigned, so you will have to manually add the cluster IP address in the TCP/IP Advanced Properties for that NIC:
a. Click the Properties button for TCP/IPv4.
b. Click the Advanced button.
c. Click the Add button.
d. Enter the NLB cluster's shared IP address and subnet mask.
e. Click OK 3 times to save your changes.

5. Now it's time to get it all working by setting the NLB cluster's MAC as the static MAC assigned to that virtual NIC in Hyper-V. This will need to be done for the public virtual NIC on both nodes of the NLB cluster. You can find this MAC by looking at the properties of the NLB cluster you just created.

6. Now, you'll need to shut down both nodes of the NLB cluster, because you can't change the MAC address settings for a VM while it is turned on.

7. Once they are turned off, go into the VM settings, select the virtual NIC that serves as the public NIC for the NLB cluster, and change it from a dynamic MAC to a static MAC that matches the NLB cluster's (a scripted equivalent is sketched below).
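
For anyone who prefers to script this, here is a rough equivalent of steps 2 and 5-7. It is only a sketch: it assumes the Hyper-V PowerShell module that ships with Windows Server 2012 and later (the walkthrough above uses the Hyper-V Manager GUI), and the VM name and MAC addresses are made-up examples.

    # Example values only - substitute your own node name and MAC addresses
    $vmName     = 'NLB-Node1'      # hypothetical node name
    $publicMac  = '00155D0A1B02'   # dynamic MAC Hyper-V assigned to the public NIC (step 2)
    $clusterMac = '02BF0A000064'   # shared MAC from the NLB cluster properties (step 5)

    Stop-VM -Name $vmName          # the MAC can only be changed while the VM is off (step 6)

    # Swap the public NIC's dynamic MAC for the NLB cluster's static MAC (step 7)
    Get-VMNetworkAdapter -VMName $vmName |
        Where-Object { $_.MacAddress -eq $publicMac } |
        Set-VMNetworkAdapter -StaticMacAddress $clusterMac

    # Turn the VM back on afterwards and repeat on the other node.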


Voila! Once you turn the VMs back on, the NLB cluster configuration will be complete. Both nodes will show as converged in NLB Manager, and all three IP addresses (public, private, and cluster) will be pingable from both nodes and from other computers on the network.

As a side note, from some research I did, this apparently isn't a problem if you use the Legacy Network Adapters available in Hyper-V instead of the standard Network Adapters that are used by default. However, those Legacy Network Adapters do not work properly in an x64 OS and are not supported.

VLAN Tagging to the rescue!

Posted on : 11:02 PM | By : Mike Laurencelle

We've recently been running into a problem with our virtualization infrastructure, and I found the solution in a Microsoft Hyper-V feature: VLAN tagging.

For our main virtualization infrastructure, we are using HP c-Class blades with 4 NICs, each connected to a different VLAN. These servers are set up as a cluster, which has significantly limited our ability to virtualize more servers, because we can only connect the cluster to a limited number of VLANs for the guest VMs.

Enter Hyper-V, with the ability to do VLAN tagging at the VM level. Now I can trunk the four connections together to provide redundancy and additional throughput for my bandwidth-hungry VMs, assign them to multiple VLANs, and have each VM tag traffic for the VLAN it belongs on. Sounds simple, but obviously a lot of planning and configuration goes into this, especially in a clustered and complex environment such as ours.
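
To give a flavor of the per-VM setting, here is a hedged sketch: these cmdlets come from the Hyper-V PowerShell module in Windows Server 2012 and later, and the VM name and VLAN ID are made-up examples. On the original Hyper-V release, the same VLAN ID setting lives in the VM's network adapter properties in Hyper-V Manager.

    # Tag all traffic from this VM's virtual NIC with VLAN 110 (example values only)
    Set-VMNetworkAdapterVlan -VMName 'Web01' -Access -VlanId 110

    # Verify the VLAN setting
    Get-VMNetworkAdapterVlan -VMName 'Web01'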

I will be posting a follow-up with step-by-step instructions for doing this at a more basic level, but the fundamentals will apply to more complex scenarios.