New Touch Screen system

Our Department is located on two separate floors (4 and 5) of the ICT building. There’s a room map on each floor, but it contains only room numbers and no names. See Fig 1.


Fig 1. Room map for the 5th floor (and some updated touch info 🙂)

The problem with this map is that you only have room numbers, with no idea which room belongs to the person you are looking for.

In the past we’ve had a printed A4 sheet next to this room map which included both person names and room numbers (Fig 2 below). Whenever the personnel list changed on our webpage, “the system” printed a new sheet. This was done automatically by a script in our CMS (the Department webpage). The personnel listing includes BOTH room number and person name. The downside of this system is that someone has to (manually) replace the piece of paper in the corridor every now and then. It tends to be forgotten…


Fig 2. Old school

Anyway, we decided to enter the touch screen era and try out an electronic version of this map with both room numbers and person names. Actually, this new system includes more than just room numbers and names: it includes all the details about a person that are available on the Department’s homepage (a picture, an e-mail address and a phone number). We also have a “You are here” sign and a red dot which marks the person’s room location on the map (see Fig 5). As we already have this listing electronically on our webpage, only some small tweaks were needed for the touch layout. Thanks to my colleague for the HTML/coding part 🙂 Now let’s look at the project in more detail.



Acer 5600 U, All in One PC.

We had a look at many different brands and models, and this one had the looks. It was also one of the wall-mountable models, and the price was right. We decided to go with the cheapest model of the 5600 series, because even those deliver good performance. We bought one for the fourth floor and one for the fifth.



  • Windows 8 originally
  • Replaced it with our Windows 7 image (no need for Windows 8 in our case)
  • Tweaked the OS a little bit:
    • Added a local user named “kiosk”. This user logs on automatically when the computer starts (Sysinternals Autologon).
    • Internet Explorer runs at system startup. It runs in “kiosk mode”.
      • This is our main “touch interface”. It’s basically a webpage in full screen.
      • iexplore.exe -k
    • Disabled some touch functions, for example right click in Internet Explorer. Also disabled pinch zoom.
    • Disabled Bluetooth
    • Disabled USB ports
    • Disabled Wireless interface
    • Changed Power Options/Choose what the power buttons do to “Nothing”. (No accidental power off when people are messing around…)
    • Installed TightVNC Server for easier remote access
    • Enabled Concurrent Sessions so you can RDP into the machine without disrupting the current (kiosk) user
    • Monitor should switch off during the night and switch on automatically in the morning. (Now done, see Fig 3.)
  • Keyboard and mouse are stored in my room 🙂
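For reference, the autologon that Sysinternals Autologon sets up, and one common way to launch IE in kiosk mode at logon, boil down to registry values roughly like the following. This is a sketch: the start page URL is made up, and Autologon actually stores the password encrypted as an LSA secret, not in the registry.

```reg
Windows Registry Editor Version 5.00

; Automatic logon for the local "kiosk" user (Sysinternals Autologon
; writes these values; the password itself is stored as an LSA secret)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"AutoAdminLogon"="1"
"DefaultUserName"="kiosk"

; Start Internet Explorer in kiosk mode (-k) at logon
; (the URL is a placeholder for the touch interface page)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run]
"KioskBrowser"="\"C:\\Program Files\\Internet Explorer\\iexplore.exe\" -k http://intranet.example/touch"
```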


Fig 3. Computer sleep/wake

Steps for sleep/wake:

  • Create two new power plans (Control Panel/Power Options/Create a power plan):
    • For example “Monitor Always ON” (Balanced, with Turn off the display and Put the computer to sleep both set to Never)
    • For example “Monitor OFF” (Balanced, with Turn off the display set to Never and Put the computer to sleep set to 1 minute)
  • Use Task Scheduler to activate the power plans according to your needs. My computer goes to sleep every day at 19.00 and wakes up at 07.15 in the morning (see Fig 3).
    • Use powercfg -list to list existing power schemes
    • Use powercfg -setactive <GUID> to set your default scheme
  • Fig 3 shows my schedules. I have one called “Monitor OFF” and one called “Monitor ON”. They use powercfg -setactive <GUID> to switch between the two schemes.
  • Works as it should… almost. I noticed that the monitor/computer wasn’t switched on in the morning.
  • Had a look at the system logs in Event Viewer. Lots of entries with source “Power-Troubleshooter”. Further investigation showed:
    • Sleep Time: 2013-09-02T02:00:59.919249500Z
      Wake Time: 2013-09-02T03:06:58.670908100Z

      Wake Source: Device - Realtek PCIe GBE Family Controller

    • Apparently the computer had been waking up every now and then without me even knowing it. Some googling showed this has to do with the power options of the network card itself. Long story short: have a look at the “Only allow a magic packet to wake the computer” setting (Fig 4 below).


           Fig 4. Power Management options for the network adapter

  • In my case, “Only allow a magic packet to wake the computer” was unchecked, which caused the computer to wake up on almost every Ethernet packet out there. Not good. Source:
  • The Monitor OFF sleep schedule works fine. However, the computer didn’t wake up at 07.15. Well, powercfg can’t run if the computer is sleeping, right?
  • Solution: Put a tick in “only allow a magic packet to wake the computer” and have another computer send a “magic packet” to wake this computer.
  • Installed a Wake on LAN command line utility on one of our servers, which sends a wake-up call in the morning.
    • Made a batch file which included two lines to wake up both our touch screen computers (example in the url above)
    • Had our server run a scheduled task with the batch file at 07.14.30. This made the touch screen computers wake up 30 seconds before they changed their power profile to “Always ON”.
  • I now feel like MacGyver but at least it’s working 🙂
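For illustration, the wake-up call made by the batch file can be sketched in Python. This is not the utility the author used, and the MAC addresses, broadcast address and port below are placeholders:

```python
# Sketch of a Wake-on-LAN sender. A "magic packet" is 6 bytes of 0xFF
# followed by the target MAC address repeated 16 times, sent as a UDP
# broadcast (port 9 is the conventional choice).
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build the 102-byte magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a magic packet so the NIC wakes the machine."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# Example: wake both touch screen PCs (placeholder MACs)
# send_wol("00:11:22:33:44:55")
# send_wol("00:11:22:33:44:66")
```

Scheduling something like this a little before the “Monitor ON” task fires mirrors the 07.14.30 trick described above.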


So far so good. Seems “secure enough”, and no complaints either (one week has passed). I probably forgot to add something to the list above, though…


System in action

Below are some pictures showing the system in action. Sorry for the crappy quality.


Fig 5. New vs. old. Which one do you prefer? 🙂

As you can see below, there’s also a bit more information available on the start page than just the personnel listing. This is mostly to fill up the page with other (useful) stuff when a room map isn’t being displayed. When you click (or perhaps I should say “touch”) a person on the list, the layout will look like the one in Fig 5 above.



Fig 6. Main page

The main page (Fig 6) displays, by default, some of the Department’s projects and today’s lunch in our restaurant. None of the elements (to the right) are clickable, as that would open up a new browser window and so on. We only wanted the personnel listing clickable (at least for now), so we try to keep things as tidy as possible. You can, however, use the QR codes to retrieve the information on your mobile phone.

This is just one way of using a touch screen PC. We’re open to new ideas, and we are already trying to figure out what we could do with the embedded webcam 🙂

Deploying Windows 7/8 with Microsoft Deployment Toolkit (MDT) 2012 Update 1 and Windows Deployment Services (WDS)

This document is a bit dated, I wrote it back in November 2012 (with some small updates later on).



Lab environment


I started out in a lab environment and moved over to production environment when everything was working as expected. My testing environment was (is) VMware Workstation.

I have to say that all the guides I found on the Internet were a bit confusing, but I finally got it working the way it should. I’ll try to recap my steps, and hopefully it won’t be as confusing for others trying to build a similar environment.


I basically followed these steps:


· Installed Windows Server 2008 R2 Datacenter in a Virtual Machine.

· Configured the Virtual Machine:

o   Network as host-only with a static IP-address.

o   Added a second virtual hard drive. It’s best practice to have the deployment share on a different drive/partition.

· Installed  the necessary software:

o   .NET Framework 3.5 from Server Manager, Features

o   Windows Automated Installation Kit (AIK) v. 3.0 (Update: please use Windows ADK)

o   Microsoft Deployment Toolkit (MDT) 2012 Update 1

· Installed necessary Server Roles for WDS:

o   Active Directory Domain Services Server Role

o   DNS Server Role (configuration documentation not included for lab environment)

o   DHCP Server Role (configuration documentation not included for lab environment)

· Copied a plain Windows 7 Enterprise 64-bit image to the server

· Copied  our production .wim-image to the server (also Windows 7 Enterprise 64-bit)





Now the server was ready for configuring the most important part, Microsoft Deployment Toolkit (MDT) 2012 Update 1. As I said before, many guides are available on the Internet but they can be confusing. One guide that helped me was:

Thanks to the author for this one. Kept me going without giving up 🙂

Anyways, I’ll try to recap my steps:


· Created a new Deployment share, D:\DeploymentShare$ in my case.

o   Disabled every step in options (wizard panes)

· You’ll end up with a very basic vanilla Deployment Share. This has to be heavily customized for your own environment.

· Add Operating System(s) either from Source (DVD) or from an image file (.wim). There are a couple of questions to answer during the OS import, but they can be googled if not self-explanatory.



Fig 1. Adding Operating Systems in MDT 2012


· Above is a screenshot with two Operating Systems added. This is enough for my deployment. I used an old domain image, which I installed in a virtual machine. I updated all programs and added some new ones. I then sysprepped the virtual machine and made an image with ImageX. (I took a snapshot before this so it’s easy to revert.) You can use other techniques to sysprep and capture (MDT’s own Task Sequence, for example), but I used ImageX because I’ve done it before. You now have your “Golden Image”, which can be deployed straight away or modified by adding applications, injecting drivers and so on.

· Many of the important settings are available when you right-click the deployment share and choose Properties. Fig 2 shows a screenshot of the default rules for the deployment share. Much can (and should) be changed. I’m not going through every setting here as you can find help online, for example:



                Fig 2. Default Rules for the Deployment Share. 


Screenshots are better than text, so here are my rules after modifications. Almost all dialogs are bypassed, except machine name and domain. I also configured logging, as it’s nice to know if something went wrong (SLShare=\\WDS\Logs)



                 Fig 3. CustomSettings.ini (Rules)
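Since the screenshot can’t be reproduced here, rules along the lines described above might look roughly like this. This is an illustration, not the author’s exact file; only SLShare comes from the text, the rest of the values are assumed:

```ini
[Settings]
Priority=Default

[Default]
OSInstall=Y
; Bypass most wizard panes...
SkipAppsOnUpgrade=YES
SkipCapture=YES
SkipAdminPassword=YES
SkipProductKey=YES
SkipUserData=YES
SkipBitLocker=YES
SkipSummary=YES
; ...but still ask for machine name and domain
SkipComputerName=NO
SkipDomainMembership=NO
; Logging, so you can tell if something went wrong
SLShare=\\WDS\Logs
```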





Time to move along to the WDS part. I’ve already installed the WDS server role, so now it’s time to configure it.


· Start WDS, right-click your server and choose Configure Server.


· The instructions will tell you to add the default images (Install.wim and Boot.wim) that are included on the Windows 7 installation DVD (in the \Sources folder). This is where it gets a bit confusing (at least for me). DO NOT add the install image, JUST the boot image. This way you simply boot from the WDS server, and can point the installation to an install image on your MDT share.


· Go back to MDT and choose properties on your Deployment Share. Go to the Rules tab. Click Edit Bootstrap.ini, down in the right corner. Edit the file according to your environment. Here’s a screenshot of my customized file:  




    Fig 4. Bootstrap.ini                     
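Again the screenshot is missing; a customized Bootstrap.ini generally looks something like the following. The values here are illustrative: the share name matches the one used earlier in the text, but the account and domain are made up:

```ini
[Settings]
Priority=Default

[Default]
; Where the deployment share lives
DeployRoot=\\WDS\DeploymentShare$
; Credentials used to connect to the share (placeholders)
UserID=deployuser
UserDomain=MYDOMAIN
UserPassword=secret
; Skip the wizard's welcome screen
SkipBDDWelcome=YES
```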


· Every time you change a setting in the Rules or Bootstrap.ini in MDT, you’ll have to UPDATE THE DEPLOYMENT SHARE (right-click the deployment share). This wasn’t that well documented.

Also, if you make changes to the boot image configuration (Bootstrap.ini), you will HAVE TO REPLACE the Lite Touch Windows PE (x64) boot image in WDS (right-click the current boot image and choose Replace) after you have updated the deployment share. Otherwise WDS will boot with the old boot image. Choose the file from your Deployment Share\Boot\ LiteTouchPE_x86.wim.


            Fig 5. WDS




Back to MDT – Task Sequences


Anyway, back to MDT. Now it’s time to make some Task Sequences, which basically tell MDT what to do before, during and after deployment. This is where the magic happens.



 Fig 6. MDT, Task Sequences.


· Right click Task Sequences, choose New Task Sequence

· Give it an ID, Name and optionally a comment

· Choose Standard Client Task Sequence (I won’t look into the other options in this document, though I will probably test them further on)

· Choose your desired Image (Operating System)

· Fill in the other information to suit your needs

· Do not specify an Administrator Password at this time

· Right click or double-click to configure your newly created Task Sequence


Have a look at all the default options from your newly created Task Sequence. Modify and test-deploy to look at different options. Google and learn. I won’t go into details of all of the options as it would take forever. Information is available online, just use it.


I haven’t modified that much as my current image has most of the important settings already. I had a look at the partitioning (Preinstall/Format and Partition Disk) and changed the volume label. 100% disk use was good for me, so I didn’t change that. It’s easy to change it later according to your needs.


I have a custom script that configures MDT to allow the graphics driver auto detect method to set the screen resolution. Thanks to Johan Arwidmark for this script. Won’t paste the code here as it’s a bit too long…

(Source: )


I also have a custom script that renames and disables the local Administrator account. It runs last in the “State Restore” phase of the deployment. It’s added via Add/General/Run Command Line and moved to the correct place in the sequence. It runs the command line cscript.exe "%SCRIPTROOT%\DisableAdmin.vbs", which runs a custom script from the default “Scripts” dir. The script contains the following:


strComputerName = "."

Set objUser = GetObject("WinNT://" & strComputerName & "/Administrator")

objUser.SetPassword "thePasswordFromCustomSettings.ini"
objUser.AccountDisabled = True
objUser.SetInfo

Set objWMIService = GetObject("winmgmts:\\" & strComputerName & "\root\cimv2")

Set colAccounts = objWMIService.ExecQuery("Select * From Win32_UserAccount Where LocalAccount = True And Name = 'Administrator'")

For Each objAccount In colAccounts
    objAccount.Rename "OldLocalAdm"
Next



(Source: )





Now it’s time to test the deployment process. You should already have configured wds with a boot image so that the clients can boot from it. You should also have specified the correct settings in Bootstrap.ini so that the Deployment Share (images) can be found from wds.


· Make an “empty” virtual machine

· Configure it to pxe-boot

· Start it

· Press F12 to boot from the network

· Your WDS-server is found

· Start Deployment and follow on-screen instructions 



Fig 7. Actual Deployment process/progress





Production environment


The setup is obviously different in the production environment. The WDS server is on our internal network, but has access to the public network (AD) via NAT. I’ll start with a picture of the whole setup to give you an idea of the configuration.



                                                                                                             Fig 8. Production Setup


Basically what we have here is a Linux computer that NATs/IP-masquerades traffic to the internal network. On the internal side we have a separate Linux DHCP server that hands out leases to all of our internal clients. Three different subnets are configured; the .17.x one is used for our wds-server. The Linux DHCP server has to be configured to point clients to boot from the Windows WDS server. More on that later on.

The steps for installation are basically the same as for the lab environment, except for the dhcp-server and (no) AD. Here’s a list:


· Installed Windows Server 2008 R2 Datacenter (in a Virtual Machine on a VMware ESXi 3.5 server)

· Configured the server:

o   Network with static IP-address.

o   Added a second (virtual) hard drive. It’s best practice to have the deployment share on a different drive/partition.

· Joined the server named “wds” to the production domain

· Installed  the necessary software:

o   .NET Framework 3.5 from Server Manager, Features

o   Windows Automated Installation Kit (AIK) v. 3.0

o   Microsoft Deployment Toolkit (MDT) 2012 Update 1 

· Installed necessary Server Roles for WDS:

o   WDS Server Role

o   DNS Server Role (not actually used, more on the configuration later on)

o   Didn’t install the DHCP Server Role, as I’m using the existing Linux DHCP server (more on the configuration in the next chapter)

· Copied a plain Windows 7 Enterprise 64-bit image to the server

· Copied  our production .wim-image to the server (also Windows 7 Enterprise 64-bit)


The steps for MDT are exactly the same as in the Lab environment. Same goes for WDS, except that I configured the server to boot from the production share. Some small changes in CustomSettings.ini (Rules) are made, for example domain and username/password.



Linux DHCP


As I said before, I decided to use our existing Linux DHCP server for PXE booting. For this to work, I added the following to /etc/dhcp3/dhcpd.conf:


subnet netmask {

        option domain-name-servers 130.232.213.x;
        # option domain-name-servers;
        option routers;

        option tftp-server-name "";
        option bootfile-name "boot\\x86\\wdsnbp.com00";


and restarted the dhcp-server, /etc/init.d/dhcp3-server restart.
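The stanza above has its site-specific values stripped out; for reference, a complete PXE stanza for ISC dhcpd typically looks something like this (all addresses here are hypothetical):

```conf
subnet 192.168.17.0 netmask 255.255.255.0 {
        range 192.168.17.100 192.168.17.200;
        option routers 192.168.17.1;
        option domain-name-servers 192.168.17.2;
        # Point clients at the WDS server and its PXE boot program
        option tftp-server-name "192.168.17.10";
        option bootfile-name "boot\\x86\\wdsnbp.com";
}
```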




Now the test client booted nicely. Here’s a screenshot:



Fig 9. PXE-booting from wds.


Not everything went smoothly, though. My Deployment Share wasn’t accessible due to DNS errors. I got “A connection to the deployment share (\\WDS\DeploymentShare) could not be made”.

I pressed F8 to get into console mode and do some troubleshooting. I could ping my WDS server via IP address, so the problem was DNS. A quick configuration check on the Linux DHCP server revealed the problem: my dhcpd.conf had the DNS option

domain-name-servers 130.232.x.x; (external).

I changed this to our own internal DNS server. The DNS server was also configured with forwarders to our external network (130.232.x.x) so name resolution works for both internal and external hosts. Good idea in theory, not in practice. Here’s a screenshot of DNS on the wds server.



Fig 10. Windows DNS Manager on wds-server


WindowsPE still couldn’t access \\wds via the short name. Somehow I got the external DNS suffixes even though I had configured the hosts to use the internal DNS server (and suffixes) in dhcpd.conf.

Also, option domain-search "", "", ""; in dhcpd.conf gives me errors and I have no idea why 🙁


root@iloinen:/etc# /etc/init.d/dhcp3-server restart

dhcpd self-test failed. Please fix the config file.

The error was:

WARNING: Host declarations are global.  They are not limited to the scope you declared them in.


Well I tried declaring them globally also… still no luck.


/etc/dhcp3/dhcpd-iloinen.conf line 167: unknown option dhcp.domain-search

option domain-search ""

Configuration file errors encountered -- exiting


I finally gave up on DNS names and used IP addresses instead. It’s not the prettiest solution, but at least it’s working. Clients are now contacting \\\DeploymentShare instead of \\WDS\DeploymentShare. Success, finally 🙂


Note to self: If a computer exists in AD, it won’t join the domain during deployment. From logs:


12/14/2012 09:34:46:923 NetpModifyComputerObjectInDs: Computer Object already exists in OU:


There is probably an easy workaround for this, but for me the easiest way was to remove the computer from AD before deployment.


My image is now finally deployed to a (physical) test computer. Success! 🙂 Further enhancements/tweaks can of course be made, and I’m writing about a few of them now. Total deployment time (12 GB compressed image) was about 30 minutes over a 1 Gbit LAN.



Adding Applications


One thing you probably want to do is add applications to your image during/after deployment. It’s quite easy (at least for basic applications); the main thing you need is the switches for a silent install and so on. I tried adding Adobe Acrobat Reader 11 to my deployment, and the installation went fine. I followed a guide from:


and as the forum post says, "AdbeRdr11000_en_US.exe /sPB /rs" also worked for me. I guess the installation of different programs is about the same, so I won’t try any others at the moment. Time will tell what I need.



Adding Drivers


One more thing you probably want to customize is drivers. You can add/inject out-of-box drivers from different vendors. This is very useful, as you can have different setups for workstations, laptops and so on. Update: I suggest that you have a look at selection profiles (or similar) before you mess around with other driver options:


Our regular workstations (Osborne Core 2’s, a bit on the older side) work fine without (almost) any additional drivers, but I’ll add the missing ones with a trick learned from a video.



Laptops (Lenovo)


Our Department uses Lenovo ThinkPad laptops, which need various drivers. I will test injecting a couple of these. Lenovo has made some (excellent) administrator tools that help you with the drivers. Instead of downloading and injecting drivers one by one, you can use programs that do all of this automatically. Well, semi-automatically anyway. They’re called ThinkVantage Update Retriever and ThinInstaller. Google “thinkvantage update retriever mdt” and you will find a Word document with instructions.


Here are my steps:


· Downloaded Lenovo Update Retriever 5.00 and installed it on the wds/mdt server

· Downloaded Lenovo ThinInstaller 1.2 and installed it on the wds/mdt server

· Didn’t completely follow the setup instructions in the document.

o   It was suggested to add the drivers to the Out-of-Box Drivers section. If you do this, the drivers get added to the boot image, which makes it grow to a huge size. I only need LAN (and possibly HDD) drivers in the boot image. In my case I needed neither, because WinPE found my HDD and LAN card without additional Out-of-Box Drivers.

· Skipped to the Working with ThinInstaller step of the guide

· Followed the guide, and added a step (after the restart step in the Postinstall section) in my task sequence for copying the ThinInstaller files from the server to c:\thin on the clients.

· The next step is to create a command (after the previous step) that actually runs ThinInstaller and installs all the necessary software and drivers on the client.

The command used here is:

C:\Thin\ThinInstaller.exe /CM -search A -action INSTALL -noicon -includerebootpackages 1,3,4 -noreboot

· Ran a test deployment on our Department’s Lenovo T500

· Mixed results; it didn’t actually work that well. Too many details to go through here.

· Ended up with plan B, which was installing Lenovo’s System Update via MDT’s “Applications”. Again, not the prettiest solution, but at least you have the option of installing this software, and it doesn’t take that long to install the missing drivers/software afterwards.

Our main installation scenario is workstations anyway, so I’ll put my energy into other parts of the deployment process.



Workstations (Osborne)


Nothing special here, same procedure as with the laptops except a different Task Sequence without the Lenovo stuff.


· Installed our production image

· Installed missing drivers via Windows Update after deployment completed

· Copied the drivers that were installed via Windows Update (using the trick from the video described earlier)

o   From: Clients C:\Windows\system32\DriverStore\FileRepository\

o   To: mdt-server

o   Drivers with a newer date than 28.11.2012 (i.e. dated after my image making/sysprepping)

· Injected the drivers into MDT

· Drivers will be used in next deployment. Tadaa 🙂

· Update: now using selection profiles instead
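The copy step in the list above can be sketched like this. It’s an illustration only: the FileRepository path and the 28.11.2012 cutoff come from the text, but the destination path is made up, and the author did the copying with the trick from the video rather than a script:

```python
# Sketch: find driver package folders in the client's FileRepository that
# are newer than the image/sysprep date, and copy them to a staging folder
# for import into MDT.
import shutil
from datetime import datetime
from pathlib import Path

def newer_driver_dirs(repo: Path, cutoff: datetime) -> list[Path]:
    """Return subdirectories of the repository modified after the cutoff."""
    return [d for d in repo.iterdir()
            if d.is_dir() and datetime.fromtimestamp(d.stat().st_mtime) > cutoff]

def copy_new_drivers(repo: Path, dest: Path, cutoff: datetime) -> int:
    """Copy qualifying driver folders to dest; return how many were copied."""
    dest.mkdir(parents=True, exist_ok=True)
    dirs = newer_driver_dirs(repo, cutoff)
    for d in dirs:
        shutil.copytree(d, dest / d.name, dirs_exist_ok=True)
    return len(dirs)

# Example (paths are placeholders):
# copy_new_drivers(Path(r"C:\Windows\System32\DriverStore\FileRepository"),
#                  Path(r"\\mdt-server\Drivers$\osborne"),
#                  datetime(2012, 11, 28))
```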


Now that all of the “new computer” installations are working the way I want, I decided to go ahead and try refresh and replace installations. This is handy, for example, if you get a new computer and want to save the data from your old one.



Refresh installation


I decided to try a refresh installation so I would know what it does. I didn’t do this on physical hardware, just in my lab environment.


“Basically you need to launch the deployment wizard from the OS you’re about to replace.

There are a variety of ways to do this but I usually browse to my deployment point on the network and run the BDD_Autorun.wsf within the scripts folder (an example is \\<server>\distribution$\Scripts\BDD_Autorun.wsf).

It will give you the option to either Refresh or Upgrade this computer, choose refresh, finish the wizard stuff and you should be good to go.”




I ran BDD_Autorun.wsf and sat back to watch the magic. The result was a “refreshed” computer, just the way I left it before the refresh including all my documents and all extra folders I had created.



Replace installation


I decided to try out the replace installation as well. This is more likely to come in handy when new computers arrive at the Department and we want to save all the data from the old one.

Here’s some information copy/pasted from Andrew Barnes’s scripting and deployment Blog.


An existing computer on the network is being replaced with a new computer. The user state migration data is transferred from the existing computer to a share, then back to the new computer. Within MDT this means running two task sequences: a Replace Client Task Sequence, then a task sequence based on the Standard Client Task Sequence template. The Replace Task Sequence will only back up your data and wipe the disk in preparation for disposal/reuse.


  • Task Sequence deployment from within Operating System or Bare Metal
  • Task Sequence run on Source machine captures user state
  • New machine begins using PXE boot or boot image media
  • User state must be stored on a share or state migration point
  • User state and compatible applications re-applied on new machine





Pic source:


·         I created a new Standard Client Replace Task Sequence on the wds server.

·         I ran BDD_Autorun.wsf (from \\wds-server\DeploymentShare$) on the computer to be replaced, which launches the Windows Deployment Wizard.

·         I chose my newly created Standard Client Replace Task Sequence from the list of Task Sequences.

·         It didn’t work; I ended up with errors. The solution was to make some modifications to CustomSettings.ini:
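The screenshot of those modifications is missing; for this scenario the relevant CustomSettings.ini properties are typically along these lines (values assumed, matching the MigData share on the wds-server mentioned in the text):

```ini
[Default]
; Store user state on a network share instead of asking the wizard
SkipUserData=YES
UserDataLocation=NETWORK
UDShare=\\WDS\MigData
UDDir=%OSDComputerName%
```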







Using this modification, the User Data got stored in MigData on the wds-server.

Note: I could also have used the method described later, i.e. removing stuff from CustomSettings.ini…



·         I then ran a Standard Client Task Sequence to do a new installation and restore the user data from MigData. Result: the Standard Client Task Sequence did NOT restore the user data.


·         Had to do some more reading about the subject, starting with:

“The Client Deployment Wizard will ask if you want to restore user state and where the user state is stored.  The Restore User State step in the task sequence would then use USMT to restore the user state to the computer being deployed”.


This was not true in my case; the Wizard didn’t ask me anything. Time to find out why.


·         Even more reading in:—deploying-images-to-target-computers.aspx

·         The easiest solution for me was to remove all the automatic stuff I had added in CustomSettings.ini. I changed (commented out) the following so I could answer the questions manually:
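That screenshot is also missing, but judging from the note at the end of this section, the commented-out lines were presumably something like this (illustrative):

```ini
[Default]
; Commented out so the wizard asks these questions manually
;SkipUserData=YES
;SkipDeploymentType=YES
;UserDataLocation=NETWORK
;UDShare=\\WDS\MigData
;UDDir=%OSDComputerName%
```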








·         I ran the replace task sequence from the source computer again. I now had the option to tell MDT where to save the backup and whether I wanted to restore the user data into the new installation. I saved the files to the wds-server.

·         Created a new virtual machine and deployed Windows via a Standard Client Task Sequence. Manually answered questions in the wizard. I now had the option to restore the user data.

·         Success 🙂

·         (I later noticed that SkipUserData & SkipDeploymentType were the correct options to solve my little mystery. I don’t mind answering a couple of questions, and I don’t need UDShare, UDDir etc. automatically defined.)

Source:—customizing-target-deployments.aspx )


There’s also an UPGRADE installation/deployment option, but I won’t test it because we have no need for it. You can’t upgrade from WinXP to Win7/8, so in our case it’s of no use.





Windows 8 Deployment


I also tried deploying a plain and a production image of Windows 8. It’s about the same procedure as with Windows 7, but you have to uninstall WAIK and install the new Windows Assessment and Deployment Kit (Windows ADK) for Windows 8 for proper deployment.

Also, update your deployment share and copy over the new boot image to the wds server (ADK uses a new version of Windows PE).

Other than that, everything seems to be working including Task Sequences and so on.




Tried a (successful) Win 8 deployment (4.3.2013), and here are a couple of other problems:


With these problems fixed, everything seemed to be working just fine. (I actually uninstalled DNS completely, as I didn’t need it.)



Note 2:


I’ve now (5.3.2013) moved over to better driver management with selection profiles.

Good article about this:



Note 3:


Learn how to deploy with UEFI in my post Converting a Windows 8 BIOS Installation to UEFI.




That’s it for this document. It’s been fun and I’ve learned a lot 🙂










Building a small Windows Server 2008 R2 cluster

Note: This document was written in Word back in February 2012. I’m just posting it now that I’ve entered the blogging arena 🙂


I had some old servers and some spare computers so I decided to build a test cluster with both physical and virtual servers. My main goal was to test out Microsoft’s Hyper-V virtualization solution as I’ve been working with VMware ESXi until now. I also wanted to try out System Center Configuration Manager 2007 as it is a very useful piece of software.


The following configurations were used:




Rack servers


primergy1: Domain Controller, DNS, DHCP


·         Fujitsu Siemens Primergy RX300 S2

·         2 x Intel Xeon 3.20GHz CPUs

·         4GB RAM

·         Dual NIC

·         6 x 146GB SCSI HDDs in hw raid-5

·         Windows Server 2008 R2 Enterprise SP1



primergy2: Storage Server (iSCSI)


·         Fujitsu Siemens Primergy RX300 S2

·         2 x Intel Xeon 3.20GHz CPUs

·         4GB RAM

·         Dual NIC

·         6 x 146GB SCSI HDDs in hw raid-5

·         Windows Storage Server 2008 R2 Enterprise SP1



primergy3: System Center Virtual Machine Manager 2008 R2


·         Fujitsu Siemens Primergy RX200 S2

·         2 x Intel Xeon 3.20GHz CPUs

·         4GB RAM

·         Dual NIC

·         2 x 146GB SCSI HDDs in hw raid-1

·         Windows Server 2008 R2 Enterprise SP1


Hyper-V servers



·         Intel Core 2 Duo 2.33GHz

·         8GB RAM

·         Dual NIC

·         250GB SATA HDD

·         300GB mounted iSCSI Clustered Disk space from primergy2

·         Windows Server 2008 R2 Enterprise SP1


VM1: System Center Configuration Manager 2007 R3 (sccm)

Windows Server 2008 R2 Enterprise SP1 as OS


VM2: Windows 7 64 bit (win7client1)


VM3: Windows Server 2008 R2 Enterprise (failovertest, running from iSCSI clustered disk)




·         Intel Core 2 Duo 2.13GHz

·         4GB RAM

·         Dual NIC

·         300GB SATA HDD

·         300GB mounted iSCSI Clustered Disk  space from primergy2

·         Windows Server 2008 R2 Enterprise SP1


VM1: System Center Operations Manager 2007 R2 (scom)

Windows Server 2008 R2 Enterprise SP1 as OS






Operating systems:

• Windows Server 2008 R2 Enterprise SP1
• Windows Storage Server 2008 R2 Enterprise SP1
• Windows 7 SP1 64 bit

Other software:

• Microsoft Hyper-V with failover clustering
• System Center Configuration Manager 2007 R3
• System Center Virtual Machine Manager 2008 R2
• System Center Operations Manager 2007 R2
• Microsoft SQL Server 2008 R2 Enterprise
• Microsoft SQL Server 2005 Express
• Microsoft iSCSI Software Target




Fig 1. The cluster




Fig 2. Cluster network diagram

Fig 1 above shows the actual cluster, and Fig 2 shows the network diagram with the connections between the different servers.



Primergy 1  (2, 3 later)

The project started with OS installations on the three Fujitsu Siemens rack servers. These servers can't handle virtualization, so each one runs a single (Windows) OS, Windows Server 2008 R2 Enterprise in my case. They were designed back when Windows Server 2008 wasn't even in beta, and the installer had problems detecting the onboard SCSI card. Luckily I was able to download drivers (available only for Windows Server 2003 and older) that worked, and I got the servers up and running quite fast. I then ran Windows Update on all of them to make sure they were up to date, even though this is only a test lab.


Now it was time for some cabling and configuration of the network. I hooked up all the three servers to a gigabit switch for internal communication. I started configuring primergy1 as it would become the Domain Controller. I enabled Active Directory Domain Services, DNS Server and DHCP Server roles.  The servers were configured with the following static configurations:


Network interface 1 (internal network):

Primergy1:                         Primergy2:                         Primergy3:





I had previously changed the server names so now I just joined primergy2 and primergy3 to the domain. My domain is called jgs.test. I didn’t install anything on primergy2 and primergy3 just yet. I’ll get back to those servers later on.
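For reference, the same role setup and domain join can be scripted on Server 2008 R2. This is only a rough sketch, not the exact commands I used; the admin account name is a placeholder:

```powershell
# On primergy1: add the DNS and DHCP Server roles
Import-Module ServerManager
Add-WindowsFeature DNS, DHCP

# Promote primergy1 to a domain controller for jgs.test
# (dcpromo starts the wizard; an answer file can automate it)
dcpromo

# On primergy2 and primergy3: join the new domain
# (/passwordd:* prompts for the password; the account name is a placeholder)
netdom join primergy2 /domain:jgs.test /userd:jgs\Administrator /passwordd:*
```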

I wanted to create only an internal domain network so I wouldn't mess around with other (external) networks. When the internal network was configured I added an extra network cable to primergy3 for external access with RDP. Primergy3 is used to administer the whole cluster; I use RDP from that server to access all of the other nodes/servers/machines on the internal network. The configuration for the second NIC on the primergy servers:


Network interface 2 (external network):

Primergy1: Not connected
Primergy2: Not connected
Primergy3: DHCP (set to obtain an address from an external DHCP server)
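The per-NIC addressing can also be set with netsh instead of the GUI. A sketch with placeholder interface names and addresses (the real ones aren't listed here):

```powershell
# Internal NIC: static address on the internal test network (placeholder values)
netsh interface ipv4 set address name="Internal" source=static address=10.10.0.1 mask=255.255.255.0

# Point DNS at the domain controller, primergy1 (placeholder address)
netsh interface ipv4 set dnsservers name="Internal" source=static address=10.10.0.1 primary

# External NIC on primergy3: obtain an address from the external DHCP server
netsh interface ipv4 set address name="External" source=dhcp
```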



Hyperv1, 2

Now it was time for Hyper-V server installations. I started by installing Windows 2008 R2 Enterprise on both servers. After the installation I enabled the Hyper-V role (and the failover feature which I will try later on). I hooked up both servers to the gigabit switch with the other servers for internal communication. The servers were configured with the following static configurations:


Network interface 1 (internal network):

Hyperv1:                            Hyperv2:                           





Network interface 2 (external network):

Hyperv1: DHCP (from an external DHCP server)
Hyperv2: DHCP (from an external DHCP server)


Now I joined the servers to the domain. After this it was time to configure networking in the Hyper-V manager on both servers. The configurations are almost identical on both servers so I’ll only write about hyperv1. I know that you should use dedicated physical network adapters for different tasks in the cluster, but as this is only a test scenario and not a production environment, I’ll settle for two adapters per host.
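Enabling the role and the feature can also be done from an elevated PowerShell prompt on each host. A sketch, not my exact commands (the account name is a placeholder, and Hyper-V needs a reboot):

```powershell
# On hyperv1 and hyperv2: enable Hyper-V plus the Failover Clustering feature
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart

# Join the host to the test domain (prompts for the password)
netdom join hyperv1 /domain:jgs.test /userd:jgs\Administrator /passwordd:*
```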

I started Hyper-V Manager and went to Virtual Network Manager, which is the place for all networking options in Hyper-V. You have to create virtual networks for the virtual machines, and a network can be external, internal or private. I chose external because I wanted to use the physical network cards for communication. I built one virtual network for internal usage (10.10.x.x, the domain network) and one for external usage (192.168.x.x); both were set to the external type in Virtual Network Manager, though. At the beginning of each virtual machine installation I chose the external network as the default, so that every new installation can reach the internet, mostly for updates and activation. After updates and activation, I switch the machine over from the external to the internal network.

Now it’s time for the actual virtual machines to be installed. This is quite straight forward, at least in my configuration, as I have all of the virtual machines stored locally on disk. (I have now expanded my configuration and tried failover configuration with shared storage. I’ll write more about that later on).

Right click on the server name in Hyper-V Manager and choose new virtual machine. Follow the guide and install either from disk image (.iso) or from physical cd/dvd-rom. All options are rather self explanatory. Remember to choose the right network settings and you are good to go. Below is a screenshot of Hyper-V Manager:



Fig 3. Hyper-V Manager


Virtual Machines


System Center Configuration Manager 2007 r3 (sccm, on hyperv1)

I installed a new virtual machine on hyperv1 called sccm. I installed Windows Server 2008 R2 Enterprise as the base OS in this machine, as this was required for sccm. I then followed guides for both the sccm installation and the configuration. The installation guide I followed is called Install SCCM 2007 on Windows Server 2008 R2 – Step by Step and can be found at: Thanks to the author for the guides! I'll try to recap them in a couple of steps.


1.      Create sql and sccm domain admin accounts in Active Directory on the Domain Controller (primergy1).


2.      Install IIS server role on the sccm server. Add a couple of IIS Role Services and Server Features.


3.      Go to Server Manager and configure WebDAV. Lots of permission options.


4.      Install SQL Server 2008 R2. Tick Database Engine Services. Tick Management Tools (Basic and Complete).


5.      Use the sql admin/sccm admin account created earlier for service accounts.


6.      Prepare Active Directory for sccm -> extend the Active Directory schema on the Domain Controller. Run extadsch.exe from the sccm install media > SMSSETUP > BIN > I386.


7.      Create some Active Directory objects: on a domain controller, go to Start > Administrative Tools > ADSI Edit > Action > Connect to. There are lots of options, but the most important one is to allow your sccm server (sccm) and sccm-admin Full Control.


8.      Install SCCM. Follow the guide. Apply sccm updates.


9.      That’s it. Now it’s time to configure sscm. I followed yet another guide from the same website. It’s called SCCM 2007 Initial Setup and Configuration. It can be found at:


10.  I just followed the guide. Note to self: it worked pretty well except for some permission problems in sccm (the sccm client wouldn't install on client computers). This was due to missing permissions in System Center Configuration Manager – Site Database – Central Site – Site Settings – Client Installation Method – Client Push Installation – Properties. I added a domain administrator account with more rights than the sccm-admin account and everything worked fine.

There was also a permission error on the Domain Controller. In Active Directory Users and Computers – System – System Management – Properties – Security, make sure that the computer “sccm” has Full Control.
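Some of the steps above have command-line equivalents. A sketch, not my exact commands; the D:\ media path is a placeholder for wherever the sccm install media is mounted:

```powershell
# Step 2: add the IIS role on the sccm server (BITS is needed for client push)
Import-Module ServerManager
Add-WindowsFeature Web-Server, BITS

# Step 6: extend the AD schema; run on the Domain Controller as a Schema Admin
& 'D:\SMSSETUP\BIN\I386\extadsch.exe'

# Verify the result: extadsch logs to the root of the system drive
Get-Content C:\ExtADSch.log
```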

BIG thanks to my friend Mats Hellman for these tips (and all the other tips).


I now have a working sccm environment. I've (push) installed the sccm client on all the servers and computers. My next step is to create installation packages for software deployment. After that I'll probably look at whole operating system installations via PXE. Below is a screenshot of sccm in action:




Fig 4. System Center Configuration Manager 2007



Windows 7 64-bit (win7client1, on hyperv1)

I installed a new virtual machine on hyperv1 called win7client1. I installed Windows 7 64-bit as operating system. I joined the machine to the domain and disabled the firewall so that the sccm client could be installed without problems. I pushed the sccm client from the sccm server to this client. That’s it for this machine (for now).
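The client preparation can be done from an elevated prompt. A sketch; the site code "ABC" is a placeholder, and disabling the firewall is only acceptable because this is an isolated test lab:

```powershell
# Turn off the firewall so the sccm client push can reach the machine
netsh advfirewall set allprofiles state off

# Alternative to push install: run the client installer manually from the
# sccm server's client share (site code ABC is a placeholder)
& '\\sccm\SMS_ABC\Client\ccmsetup.exe' /mp:sccm SMSSITECODE=ABC
```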


Windows Server 2008 R2 Enterprise (failovertest, on hyperv1)

This virtual machine was installed to test the failover cluster configuration within Hyper-V. More about that later on.


System Center Operations Manager 2007 R2 (scom, on hyperv2)

This virtual machine gets installed after primergy2 and primergy3. I’ll get back to this one later in the document.



Primergy 2

As I said before, the project started with OS installation on the three Fujitsu Siemens rack servers, including primergy2 and primergy3. Windows Storage Server 2008 R2 Enterprise SP1 was already installed on this server, so all I had to do was install the iSCSI component. There's a very good guide for this called How to setup iSCSI on Windows Server 2008, available from:


I’ll try to recap my steps:


1.      Start Microsoft iSCSI Software Target from Administrative Tools

2.      Create two new iSCSI targets called iscsi-target1 and iscsi-target2. I made two targets because you can't share the same target between the (hyper-v) servers unless they are configured as a failover cluster. I hadn't looked into that just yet, so I was fine with having a separate iSCSI target for each server.

3.      Enter the IP address/host name or IQN identifier of the initiator (the computer that will connect to this iSCSI target).

4.      Create a virtual disk for iSCSI target. One for each target in my case.

5.      Go to the server/computer that will “mount” the iSCSI drive. Go to administrative tools and start iSCSI Initiator.

6.      Go to the Discovery tab and enter the IP address of the iSCSI target server, primergy2 in my case. It should now discover the iSCSI target. The other steps are in the guide.

7.      Format the new drive. It can now be used as a normal hard drive attached to the computer.

Note: I had “offline” problems on one of the hyper-v servers. It got fixed by following the steps on: How to change default SAN disk status from offline to online

8.      I now have an iSCSI disk on both hyper-v servers. I’m going to install my next virtual machine on this drive instead of local storage just for the fun of it.

9.      Later on I created new targets for use with failover clustering.
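Steps 5–7 on the initiator side can also be done with the built-in iscsicli tool. A sketch; the portal address and the IQN below are placeholders:

```powershell
# Make sure the iSCSI Initiator service runs and starts automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Step 6: add the target portal (primergy2's internal address, placeholder here)
iscsicli QAddTargetPortal 10.10.0.2

# List the targets the portal exposes, then log in to one of them
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:primergy2-iscsi-target1-target

# If the new disk shows up as "offline" (see the note above), start diskpart
# and run "san policy=OnlineAll", then bring the disk online and format it
```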


Below you have a picture of the Microsoft iSCSI Software Target main window.



Fig 5. Microsoft iSCSI Software Target



Primergy 3

It was now a suitable time to install System Center Virtual Machine Manager 2008 R2 because it is dependent on the Hyper-V servers. Quote from Microsoft’s own site:

System Center Virtual Machine Manager 2008 R2 helps enable centralized management of physical and virtual IT infrastructure, increased server utilization, and dynamic resource optimization across multiple virtualization platforms. It includes end-to-end capabilities such as planning, deploying, managing, and optimizing the virtual infrastructure.


Nothing complicated about this installation. Windows Server 2008 R2 Enterprise SP1 was already installed on this server, so I just installed System Center Virtual Machine Manager 2008 R2 (SCVMM). SCVMM requires an SQL server, so I installed the bundled Microsoft SQL Server 2005 Express Edition. Later on you enter the servers you want to administer, in my case hyperv1 and hyperv2. From here on, you can add or remove virtual machines from scvmm instead of from the local hyper-v servers. A small installation guide, if needed:


Below is a screenshot from System Center Virtual Machine Manager 2008 R2 displaying connections to hyperv1 and hyperv2:



Fig 6. System Center Virtual Machine Manager 2008 R2



System Center Operations Manager 2007 R2, continued (scom, on hyperv2)

Last but definitely not least we have System Center Operations Manager 2007 R2. I saved this one for last because it's more or less dependent on all the other machines. It is "just" a (health) monitoring tool for all my virtual and physical servers/machines. In my initial configuration I had this one as a fourth physical server, but it turned out that the server had some hardware problems 🙁


I installed a new virtual machine on hyperv2 called scom. I installed Windows Server 2008 R2 Enterprise as the base operating system in this machine, as it was required for scom. I then joined it to the domain and installed SQL Server 2008 R2, as an SQL server was required for scom. At the time scom 2007 was released there was no support for SQL Server 2008 R2 yet. No problem though, I just had to do some small tweaks before the scom installation. A good guide for this, Installing SCOM 2007 R2 on SQL 2008 R2, is available at:

After doing all the tweaks, the installation went just fine. After that I just fired up the application and did some required configuration settings. You can do/monitor A LOT with scom, but the initial configuration was more than enough for my little cluster test. Below is a screenshot of System Center Operations Manager 2007 R2:




Fig 7. System Center Operations Manager 2007 R2





Hyper-V Failover Clustering


Lastly I decided to try out Failover Clustering with Hyper-V. I went to Server Manager and enabled the Failover Cluster feature. I then followed a guide called Creating Hyper-V Failover Cluster (Part 1), available from:

I had already done the preparation work, like setting up Windows Storage Server for iSCSI. I followed the guide and created two iSCSI targets called "Storage" and "Quorum". I added the disk resources to hyperv1 and hyperv2 (with help from the guide). With this part done, it was time to create the actual failover cluster. First I started the Failover Cluster Manager and validated my configuration, which passed the test.

I then started the Hyper-V console and created a new virtual machine, but didn't start it just yet. I minimized the Hyper-V console, maximized Failover Cluster Manager, right-clicked Services and applications and selected Configure a Service or Application. I chose Virtual machine from the bottom of the list, clicked next, selected my newly created machine and clicked next again. The virtual machine was now configured as highly available. I restored the Hyper-V console and started the virtual machine, installing Windows Server 2008 R2 on it as a new highly available virtual machine. That's it for the installation.

My cluster is now able to migrate the virtual machine “failovertest” from one node to another.
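The validation and cluster creation can also be done with the FailoverClusters PowerShell module that ships with Server 2008 R2. A sketch; the cluster name and static address are placeholders:

```powershell
# Load the failover clustering cmdlets
Import-Module FailoverClusters

# Validate the configuration across both Hyper-V hosts
Test-Cluster -Node hyperv1, hyperv2

# Create the cluster (name and static address are placeholders)
New-Cluster -Name hvcluster -Node hyperv1, hyperv2 -StaticAddress 10.10.0.10

# Make an existing VM highly available, same as "Configure a Service or
# Application" -> Virtual machine in Failover Cluster Manager
Add-ClusterVirtualMachineRole -VMName failovertest
```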





Fig 8. Failover Cluster Manager




Final words…


That’s it for now. This has been a really fun project and I’ve learned a lot on the way. Hyper-V turned out to be really easy to use and a fair competitor to VMware. System Center Virtual Machine Manager 2008 R2 supports both Hyper-V and VMware hosts so you can manage everything from one platform which is a very nice solution.


I will look more into System Center Configuration Manager 2007 (sccm), as it is an interesting and very useful product.


Big thanks to Mats Hellman for helping me out with problems on the way and for giving me ideas on what kind of infrastructure to build for this small scale test environment.





