Exchange 2007 to 2013 migration

This time around I'll write about all the steps I used for a successful Exchange 2007 to 2013 migration in a small company (about 40 users). I'm not sure yet whether we'll upgrade to 2016 directly afterwards, but if we do, I'll write a new 2013 to 2016 migration blog post. At this point I can also state that a direct migration from Exchange 2007 to Exchange 2016 is impossible/not supported, at least without 3rd party software. Have a look at the "migration chart" at https://technet.microsoft.com/en-us/library/ms.exch.setupreadiness.e16e12coexistenceminversionrequirement%28v=exchg.160%29.aspx for example. If you're willing to pay extra and go directly from 2007 to 2016, it should be possible with CodeTwo Exchange Migration, http://www.codetwo.com/blog/migrate-legacy-2003-or-2007-exchange-to-exchange-2016/ for example.

Anyways, onto the migration itself. As always, you should start with the homework. This time I don't have that many sources for you – instead, some quality ones that get the job done properly. Start off by reading:

The second link is awesome! Read it slowly and carefully. You’ll be a lot smarter in the end. Lots of stuff to think of, but very nicely written.

I didn't follow the guide word for word (as usual), but I couldn't have done the job without it. Some changes for our environment:

  • We do not use TMG. All TMG steps were skipped and replaced by similar steps according to our own firewall policies.
  • We have a Linux postfix server that handles all incoming email. It also handles antivirus and spam checking of emails. After these checks are done, it forwards email to the Exchange server.
  • Storage configuration / Hard drive partitions / Databases weren’t created the same way as in the guide.
  • Certificates were renewed by our “certificate guy”. No need for complicated requests etc.
  • No stress tests and/or analyses were done. No need.
  • Configured recipient filtering (there’s a chapter about it).
  • A script which deletes old IIS and Exchange logs was introduced (there’s a part written about this also).

 

My own steps for the migration process:

On the old server:

  • Patched Exchange Server 2007 and installed the latest Update Rollup (21). You should have a fully patched (old) server before installing/introducing a new Exchange server in the domain.

          exchange2007_Update_rollup21

  • Took screenshots of all current configuration (just in case). Most of the settings will migrate, however. The stuff to back up is nicely documented in the homework link above.
    • Namespace
    • Receive connectors
    • Send connectors
    • Quotas
    • Outlook Anywhere, OWA, OAB, EWS, ActiveSync settings
    • Accepted domains
    • Etc.. etc. that would be of use
  • Got a new certificate which included the new host legacy.domain.com
  • Installed the new certificate (on Exchange 2007 at first):

          exchange2007_install_new_cert

 

On the new server:

  • Installed a new server, Windows Server 2012 R2.

                   exchange2013_rsat-adds-installation

  • Moving on to the other prerequisites:

                   exchange2013_prerequisites

 

Moving on to the actual Exchange installation

  • Had a look at my partition table, just to check that everything looked OK (it did):

          exchange2013_partitions

  • The partition layout should be quite self-explanatory, so I won't comment on that. I did, however, tell setup to use the existing partitions. I actually resized the partitions a bit after this screenshot…
  • Once again following information from the excellent guide, I used the latest CU as installation source (NOT the installation DVD/ISO).

               exchange2013_prepare_schema

               exchange2013_prepare_AD_and_domain

 

  • Actual installation (note paths for DB and Logs):

          exchange2013_installation_from_powershell

  • Done. Moving over to post-installation steps

 

Post-installation steps

  • Checking and changing the SCP. This should be done as soon as possible after the installation.

          exchange_checking_scp

          Checking SCP.

          exchange_changing_scp

          Changing SCP.

  • Everything looks good!
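
For reference, the SCP check and change in the screenshots boil down to roughly the following (a sketch; the server name EX2013 and the autodiscover URL are placeholders for our actual values):

# Hypothetical names; check the current SCP, then point it at the autodiscover namespace.
Get-ClientAccessServer | fl Name,AutoDiscoverServiceInternalUri
Set-ClientAccessServer -Identity EX2013 -AutoDiscoverServiceInternalUri "https://autodiscover.domain.com/Autodiscover/Autodiscover.xml"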
  • Next, we’ll install the new certificate on the Exchange 2013 server:

          exchange2013_install_new_cert

           A simple “import” will do the job.

  • Also have a look at the certificate in IIS (and change to the new one if necessary):

          exchange2013_install_new_cert_in_IIS

           exchange2013_outlook_anywhere

Following the guide you should change the authentication to NTLM:

“As Outlook Anywhere is the protocol Outlook clients will use to communicate with Exchange Server 2013, replacing MAPI/RPC within the LAN, it’s important that these settings are correct – even if you are not publishing Outlook Anywhere externally. During co-existence it’s also important to ensure that the default Authentication Method, Negotiate, is updated to NTLM to ensure client compatibility when Exchange 2013 proxies Outlook Anywhere connections to the Exchange 2007 server”.
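On the Exchange 2013 side, that change boils down to roughly the following (a sketch; the server name is a placeholder):

# Hypothetical server name; change the default (Negotiate) to NTLM for internal clients.
Get-OutlookAnywhere -Server EX2013 | Set-OutlookAnywhere -InternalClientAuthenticationMethod Ntlm
# If you publish Outlook Anywhere externally, set -ExternalClientAuthenticationMethod accordingly.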

  • Moving over to the send and receive connectors.
    • The send connector automatically “migrated” from the old server.
    • The receive connector did NOT migrate from the old server. This is because Exchange 2013 uses different transport roles compared to 2007: 2007 had only Hub Transport, while Exchange 2013 uses both Hub Transport and Frontend Transport. For those of you interested in this change, read http://exchangeserverpro.com/exchange-2013-mail-flow/ and http://exchangeserverpro.com/exchange-2013-configure-smtp-relay-connector/ for example.
    • The CAS receives mail on port 25 and forwards it to the "backend" transport service, which listens on port 2525.
    • I left the “Default Frontend servername” with its default settings:

               exchange2013_default_frontend_recieve_connector

    • …and configured a new SMTP relay connector with "our settings". This connector has to be "Frontend Transport" – you cannot create a new connector on port 25 as Hub Transport. You'll be greeted by an error message if you try:

              exchange2013_recieve_connector_error

Information about this can be found at:

http://markgossa.blogspot.fi/2016/01/bindings-and-remoteipranges-parameters-conflict-exchange-2013-2016.html
http://exchangeserverpro.com/exchange-server-2013-upgrade-fails-due-to-receive-connector-conflicts/

“If you want to create a new receive connector that listen on port 25, you can do this but you have to create it using the Frontend Transport role if you have either an Exchange 2016 server or an Exchange 2013 server with both the CAS and MBX roles installed on the same server”.

All our University email (and this specific company’s email) is received via a Linux postfix server. This server handles all spam filtering and antivirus. After these checks are done, the mail is delivered/forwarded to Exchange.

exchange2013_aasmtp_relay_security

exchange2013_aasmtp_relay_scoping
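
A connector like that can also be created from the shell; a minimal sketch (connector name, server and allowed IP are hypothetical stand-ins for "our settings"):

# Hypothetical values; a Frontend Transport relay connector listening on port 25.
New-ReceiveConnector -Name "AASMTP Relay" -Server EX2013 -TransportRole FrontendTransport -Usage Custom -Bindings 0.0.0.0:25 -RemoteIPRanges 192.0.2.10 -PermissionGroups AnonymousUsers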

 

After these steps were done, I continued with:

  • Configuring mailbox quotas to match those on the old server.
  • Configuring the Offline Address Book to be stored on the new server.
  • Checking the log locations – should the transport logs be moved to another location or left at the default? I changed them to go to the log partition. In the end, this is just a small percentage of all logs generated; all other (non-transport) logs end up under C:\Program Files\Microsoft\Exchange Server\V15\Logging. I'm using a PowerShell script to delete all logs older than 30 days, and the same goes for the IIS logs in C:\inetpub\logs. The script is run daily via Task Scheduler and looks like this:

# Deletes Exchange and IIS logs older than 30 days from the three log folders,
# logging each deleted file to a yearly log file.
$DaysToKeep = 30
$Year = (Get-Date).Year
$LogFolders = "C:\Program Files\Microsoft\Exchange Server\V15\Logging",
              "E:\Logs",
              "C:\inetpub\logs"

foreach ($StartFolder in $LogFolders) {
    Get-ChildItem $StartFolder -Recurse -Force -ErrorAction SilentlyContinue |
        Where-Object { !$_.PsIsContainer -and $_.LastWriteTime -lt (Get-Date).AddDays(-$DaysToKeep) } |
        ForEach-Object {
            Add-Content -Path "Delete Log $Year.log" -Value " $($_.FullName)"
            Remove-Item -Path $_.FullName
        }
}
exit

And the command to run from task scheduler:

  • PowerShell.exe -NoProfile -ExecutionPolicy Bypass -Command "& 'D:\pathtoyour\scripts\clearlogging.ps1'"
    • Runs daily at 03:00

As you've probably noticed from my Exchange installation screenshots, I already pointed the transaction logs to a different partition during the installation phase (E:\Databases\DB1). These logs don't need manual deletion, however; they get deleted automatically by the backup solution (Veeam). The key here is that the backup software has to be Exchange-aware. The other logs at E:\ are the transport logs (E:\Logs), which are only a tiny part of the whole logging structure (C:\Program Files\Microsoft\Exchange Server\V15\Logging) in Exchange. You could leave the transport logs in their default location though, as the above script goes through that directory as well…

 

Recipient filtering / Stopping backscatter

As a nice bonus, Exchange 2013 can now handle recipient filtering (rejecting mail to non-existent users) properly. For more information about recipient filtering, read:

https://technet.microsoft.com/en-us/library/bb125187%28v=exchg.160%29.aspx
http://exchange.sembee.info/2013/mbx/filter-unknown.asp
https://www.roaringpenguin.com/recipient-verification-exchange-2013

The filtering CAN be done without an Exchange Edge server, even though the Internet will tell you otherwise. We enabled it on our postfix server following tips found at https://www.roaringpenguin.com/recipient-verification-exchange-2013. The installation on the Exchange side, on the other hand, looked like this:

exchange2013_recipient_filtering1
exchange2013_recipient_filtering2

exchange2013_recipient_filtering3
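
Behind those screenshots the shell side is roughly this (a sketch, assuming a single multi-role server; Install-AntiSpamAgents.ps1 ships in the Exchange Scripts folder):

# Install the built-in anti-spam agents, restart transport, then enable recipient validation.
& "$env:ExchangeInstallPath\Scripts\Install-AntiSpamAgents.ps1"
Restart-Service MSExchangeTransport
Set-RecipientFilterConfig -RecipientValidationEnabled $true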

I also enabled Anonymous users on the “Default receive connector”:

exchange2013_default_recieve_connector

Happy days! We can now filter out non-existent users on Exchange rather than manually on the postfix server.

I also checked that recipient filtering was active and working:

exchange2013_recipient_filtering_test_telnet

Yes, it was 🙂
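
If you want to repeat the telnet test yourself, a session against port 25 should look roughly like this for a non-existent recipient (hostnames and addresses are placeholders):

telnet mail.domain.com 25
220 mail.domain.com Microsoft ESMTP MAIL Service ready
HELO example.com
250 mail.domain.com Hello
MAIL FROM:<someone@example.com>
250 2.1.0 Sender OK
RCPT TO:<nonexistentuser@domain.com>
550 5.1.1 User unknown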

With all this done, I moved forward with the configuration, again following http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part13.html

 

Getting ready for coexistence

I’ll start off by copy/pasting some text.

“With our database settings in place and ready to go, we can start thinking about co-existence – before we do though, it’s time to make sure things work within Exchange 2013! So far we’ve got our new server up and running, but we’ve still not logged in and checked everything works as expected”. Source: http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part13.html

With this information in mind, I started testing according to the above link. The chapter of interest was “Testing base functionality”. All tests passed. Very nice 🙂

With all tests done, and all users aware of the migration, I did the following after work hours:

    • Asked the “DNS guy” to make a CNAME record for legacy.domain.com pointing to the old server.
    • Changed all virtual directories on the old server to use the name "legacy" (see the sketch after this list).
      • Things to remember:
        • No external url for Microsoft-Server-ActiveSync.
        • Autodiscover Internal URL / SCP record on both Exchange 2007 and Exchange 2013 server should point to the new server.
    • Changed the DNS records to point to the new server.
      • autodiscover and the namespace record
    • Had a look at the send connector. Everything seemed OK (settings were migrated from the old server). However, one minor change:
      • Removed the old server from the “source servers” and added the new server. New mail should be sent from the new server (and not from the old one anymore):

               exchange2013_send_connector

    • postfix was also configured to route mail to the new server instead of the old one.
    • Done. Next in line is moving/migrating mailboxes to the new server. Yay.
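
The "legacy" virtual directory changes mentioned in the list above look roughly like this on the Exchange 2007 side (a sketch; EX2007 and the URLs are placeholders for our actual names):

# Hypothetical names; point the Exchange 2007 external URLs at the legacy namespace.
Set-OwaVirtualDirectory -Identity "EX2007\owa (Default Web Site)" -ExternalUrl "https://legacy.domain.com/owa"
Set-WebServicesVirtualDirectory -Identity "EX2007\EWS (Default Web Site)" -ExternalUrl "https://legacy.domain.com/EWS/Exchange.asmx"
Set-OabVirtualDirectory -Identity "EX2007\OAB (Default Web Site)" -ExternalUrl "https://legacy.domain.com/OAB"
# Remember: no external URL for ActiveSync (as noted above).
Set-ActiveSyncVirtualDirectory -Identity "EX2007\Microsoft-Server-ActiveSync (Default Web Site)" -ExternalUrl $null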

 

Migrating mailboxes

I started out by following the guide at http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part15.html, more specifically the part about "Pre-Migration Test Migrations". I moved a couple of test users, and after that I sent and received mail to/from these users via Outlook and OWA. No errors were noticed, so I moved on to the real deal and started moving "real" mailboxes. Again, nothing special; I continued following the information at http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part16.html. I did a batch of 10 users at first (users A to E) and all of them were successfully migrated:

exchange2013_migrate_users_a_to_e

(The remaining mailboxes were also successfully migrated).
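
The moves themselves are ordinary move requests; a minimal sketch (user and database names are placeholders):

# Move a test user to the new database first, then monitor progress.
New-MoveRequest -Identity "testuser" -TargetDatabase "DB1"
Get-MoveRequest | Get-MoveRequestStatistics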

 

Upgrading Exchange AD Objects

Now it was time to upgrade the AD Objects following information from http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part16.html.

exchange2013_ad_object_upgrade1

exchange2013_ad_object_upgrade2

The first two objects didn't need an upgrade; they had apparently already been upgraded automatically during the migration process. The object in the screenshot that did need an upgrade is a mailing list/distribution group.

 

Public Folders

The old environment didn't use public folders, so luckily there was no need to migrate them. I did run into some problems with Public Folders during decommissioning, however. More information in the chapter below.

 

Problems

  • Everything seemed fine, BUT after a couple of days one user didn’t see any new mail in a delegated mailbox she had. She also got the dreaded password prompt every time she started Outlook.
    • Later I heard that other users were also prompted for a password.
  • This got me thinking about authentication methods. I've seen this before. A couple of hours of googling still had my thoughts pointing in the same direction: authentication methods.
  • I still wonder why all of this happened, though, knowing that ALL mailboxes were now hosted on the new Exchange 2013 server. Why on earth would someone's Outlook even check for things on the old server? Maybe some old Public Folder references, perhaps? I don't know; the only thing I do know is that it had to be fixed.

Some links about the same dilemma (almost, at least):

http://ilantz.com/2013/06/29/exchange-2013-outlook-anywhere-considerations/
https://gonjer.com/2016/07/02/outlook-prompts-for-credentials-with-exchange-2010-and-20132016-coexistence/
http://blogs.microsoft.co.il/yuval14/2014/08/09/the-ultimate-guide-exchange-2013-and-outlook-password-prompt-mystery/ (L. Authentication Issue)
http://silbers.net/blog/2014/01/22/exchange-20072013-coexistence-urls/

The thing is, I had authentication set to "NTLM" on the new Exchange 2013 server during coexistence, following the very same guide as for almost everything else in this post. The NTLM setting should be "enough" AFAIK. One thing that wasn't mentioned in the guide, however, was how the old server was/should be configured. I'm quite sure there are many best practices for Exchange 2007 as well, but I hadn't installed that server myself back in the day. Well, hours later, after comparing different authentication methods, I finally think I got it right. Here's the before and after:

exchange2013_get-outlookanywhere_auth_methods

Before: old server IISAuthenticationMethods were only Basic.

exchange2013_set-outlookanywhere_iis_auth_methods

Solution: Adding NTLM to IISAuthenticationMethods (on the legacy server)

exchange2013_get-outlookanywhere_auth_methods_after_change

After: NTLM added
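
In shell form, the before/after in the screenshots corresponds roughly to (a sketch; the legacy server name is a placeholder):

# Check the current methods on all servers.
Get-OutlookAnywhere | fl Identity,IISAuthenticationMethods
# Add NTLM alongside Basic on the legacy Exchange 2007 server.
Set-OutlookAnywhere -Identity "EX2007\Rpc (Default Web Site)" -IISAuthenticationMethods Basic,Ntlm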

I also removed "Allow SSL offloading" from the new server for consistency. Not that I know whether it helped fix the problem or not.

exchange2013_remove_ssl_offloading

You get kinda tired of all the testing and googling, but hey, at least it's working as it should and users aren't complaining anymore! 🙂

 

  • Shared mailbox dilemma. When you send a message from the shared mailbox, the sent message goes into your own Sent Items folder instead of the shared mailbox sent items.

“If the shared mailbox is on Exchange 2010 and only has the Exchange 2010 sent items behavior configured, the settings are not converted to the equivalent Exchange 2013 settings during migration. You will need to manually apply the Exchange 2013 sent items configuration. It is probably best to do that before moving the mailbox. The Exchange 2010 settings are retained though”. Source: http://exchangeserverpro.com/managing-shared-mailbox-sent-items-behaviour-in-exchange-server-2013-and-office-365/

    • Well, no wonder my settings didn’t stick when migrating from 2007 to 2013. I configured the correct settings again:
      • Get-Mailbox mysharedmailbox | Set-Mailbox -MessageCopyForSentAsEnabled $true -MessageCopyForSendOnBehalfEnabled $true

 

Decommission Exchange 2007

Still following the guide, it was now time to decommission the old Exchange 2007 server. First off, I left the server turned OFF for a week. No problems were encountered, so I decided to move on with the real decommissioning work.

  • Didn’t need to touch any TMG rules (obviously, since we don’t use TMG)
  • Removed unused Offline Address Books (OAB)
  • Removed old Databases
    • Mailbox Database removal was OK.
    • Public Folders were a whole different story. What a headache. I followed almost every guide/instruction out there. It did NOT work. I got the "nice" message: "The public folder database "ExchangeServer\Storage Group\Public Folder Database" contains folder replicas. Before deleting the public folder database, remove the folders or move the replica to another public folder database". God dammit. We've never used Public Folders. Well, luckily I found some useful "fixes" after a while – fixes that MS won't mention. Solutions:
    • Removed CN=Configuration, CN=Services, CN=Microsoft Exchange, CN={organisation name, i.e. First Organisation}, CN=Administrative Groups, CN={Administrative Group name}, CN=Servers, CN={servername}, CN=Information Store, CN={Storage Group Name}, CN={Public Folder Database Name} with ADSI Edit (after I had backed up the object, with help from http://www.mysysadmintips.com/windows/active-directory/266-export-active-directory-objects-with-ldifde-before-performing-changes-with-adsi-edit for example).
    • Ran the Get-MailboxDatabase | fl name,pub* command again, but to my surprise the damn Public Folder Database wasn't gone. Instead it was in the AD "Deleted Objects" container. FFS, it CAN'T be this hard to remove the PF Database.
    • Trying to get rid of the deleted object with ldp didn’t work either: “The specified object does not exist”. I was getting even more frustrated.
    • Well, at least now, according to EMC, I have no active Mailbox Databases. That's good news, so I can now remove the Storage Groups even though this annoying PF DB reference still exists in AD. I can live with it for now; hopefully when the Tombstone Lifetime expires, so will this PF DB reference. (That wasn't the case, however – continue reading.)
  • Removed Storage Groups, FINALLY:

           exchange2007_storage_group_removal_success

                     exchange2013_arbitration_and_system_mailbox_check

      • System mailboxes are already on the new server. Good.
  • Uninstalled Exchange 2007 from control panel.
    • At least I tried. Of course there were problems. Again.

            exchange2007_uninstall_failiure

Got some tips from https://social.technet.microsoft.com/Forums/exchange/en-US/6469264a-dc33-4b07-8a7c-e681a0f9248f/exchange-setup-error-there-was-a-problem-accessing-the-registry-on-this-computer?forum=exchangesvradminlegacy. Solution was simply to start the Remote Registry service. It now uninstalled nicely.

          exchange2013_get-mailboxdatabase-with-pf

  • Removed legacy DNS entries
  • Firewall guy was informed that the server was decommissioned and all its firewall rules could be removed.
  • Turned off the server and archived it.
  • Happy days. No more Exchange 2007.

 

Security hardening

I always aim to keep my servers secure. This one was no exception, so I was aiming for at least a grade A on the Qualys SSL Labs test, https://www.ssllabs.com/ssltest/. I followed the guide at https://scotthelme.co.uk/getting-an-a-on-the-qualys-ssl-test-windows-edition/ and voilà, grade A was achieved 🙂 I left the HTTP Strict Transport Security policy alone for now, however; it will need some more testing.

exchange2013_qualys_ssl_labs_test_grade_A
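
The gist of that guide is disabling legacy protocols and cipher suites via SChannel registry keys. A minimal sketch of the idea (SSL 3.0 only; the guide covers more keys, and a reboot is required afterwards):

# Disable SSL 3.0 server-side; one of several keys the guide walks through.
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name Enabled -Value 0 -PropertyType DWord -Force | Out-Null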


Installing SharePoint 2013 in a two-tier topology

I got the task of installing SharePoint 2013 for a small business. The SharePoint site won't be used by that many people simultaneously, so the server load will remain quite small. With that in mind I had to figure out a suitable topology. There are many, many sources on the web describing this, so getting information wasn't a problem. In the end, I decided to go with a two-tier topology. A single tier would have been sufficient, but it's nice to have a separate SQL server which can be used by other applications/servers as well.

“In a two-tier deployment, SharePoint 2013 components and the database are installed on separate servers. This kind of deployment maps to what is called a small farm. The front-end Web servers are on the first tier and the database server is located on the second tier. In the computer industry, the first tier is known as the Web tier. The database server is known as the database tier or database back-end”.

Source: https://technet.microsoft.com/en-us/library/ee667264.aspx

Another useful link:

https://technet.microsoft.com/en-us/library/cc263199.aspx (you'll find a nice document/pdf describing Streamlined Topologies for SharePoint 2013). The document states that a two-tier farm is sufficient for up to 10,000 users. More than enough in my case.

My installation is actually based on https://captainofsharepoint.wordpress.com/2013/02/27/the-art-of-installing-sharepoint-2013-in-a-3-tier-topology-part-one/, even though I would call this a two-tier topology and not three. The SQL guide from that post is not used, as it suggests installing every component (which is unnecessary). In short, there are only two servers in my setup, namely:

  • SharePoint 2013 (more about features and roles later in the document)
  • SQL Server 2014 Standard

I won't go into the hardware details of the servers themselves because they vary so much from deployment to deployment. It's easy to scale up with more memory or better/faster SAN disks if the need arises in the future. It's also a good idea to read the following information before installing: http://sharepointpromag.com/sharepoint-2010/top-10-sharepoint-2010-configuration-mistakes-and-how-fix-them

 

AD Accounts for SharePoint and SQL

My first task was to create the needed service accounts in Active Directory. There’s a very good site describing the needed accounts at http://www.toddklindt.com/blog/Lists/Posts/Post.aspx?ID=391. I only used

  • sp_install (SharePoint installation)
  • sp_farm (SharePoint Farm Account)
  • sql_install (SQL server installation account)
  • sql_user (SQL user account)

from the list. Later I created an account named sp_srv for running miscellaneous services. This is more than enough for such a small deployment. You can read more about service accounts here:

SharePoint 2013 Service Accounts Best Practices Explained:
http://absolute-sharepoint.com/2013/01/sharepoint-2013-service-accounts-best-practices-explained.html (I'm using the medium security option)

Initial deployment administrative and service accounts in SharePoint 2013:
https://technet.microsoft.com/en-us/library/ee662513.aspx

SharePoint 2013: Service Accounts:
http://social.technet.microsoft.com/wiki/contents/articles/14500.sharepoint-2013-service-accounts.aspx

 

SQL Server 2014

Next on the checklist was the installation of SQL Server 2014. SQL Server is a requirement for SharePoint, so it has to be installed before SharePoint itself. I decided to go with http://sharepointpromag.com/sql-server-2012/sql-server-2012-sharepoint-2013-database-server-setup as the base for my installation. Before installing, I also suggest reading the following (you can never be too prepared):

A simple install of SQL Server 2012 for SharePoint Server 2013 or 2010:
http://blogs.msmvps.com/shane/2012/09/17/a-simple-install-of-sql-server-2012-for-sharepoint-server-2013-or-2010/

Instruction Guide for Installing SQL Server 2012 SP1 for SharePoint 2013:
http://www.sharepointdoug.com/2013/02/instruction-guide-for-installing-sql.html

Install SharePoint 2013 – Part 4 SQL Server:
https://www.youtube.com/watch?v=JVBmzG0p76M

Service Account Suggestions for SharePoint 2013:
http://www.toddklindt.com/blog/Lists/Posts/Post.aspx?ID=391

“The SQL Guy” Post #15: Best Practices For Using SQL Server Service Accounts:
http://blogs.technet.com/b/canitpro/archive/2012/02/08/the-sql-guy-post-15-best-practices-for-using-sql-server-service-accounts.aspx

 

Security

After doing some homework (reading articles) I came up with the idea of using SQL with a named instance (with SQL aliases for SharePoint) instead of the default instance. I also thought of blocking the default SQL port and using a new static one (configured via SQL aliases) – all of this for better security. I buried this idea, however, and instead ran with the default instance following this guide: http://blogs.technet.com/b/rycampbe/archive/2013/10/14/securing-sharepoint-harden-sql-server-in-sharepoint-environments.aspx. (The server itself is already quite well firewalled by a hardware firewall.) Some more information regarding the same matter:

Best practices for SQL Server in a SharePoint Server farm:
https://technet.microsoft.com/en-us/library/hh292622.aspx

Blocking the standard SQL Server ports:
https://technet.microsoft.com/en-us/library/cc262849.aspx#PortProtocolService

Configure SQL Server security for SharePoint 2013 environments:
https://technet.microsoft.com/en-us/library/ff607733.aspx

If you ever decide to use SQL aliases, it's advisable to read the following document: http://blogs.msdn.com/b/sowmyancs/archive/2012/08/06/install-amp-configure-sharepoint-2013-with-sql-client-alias.aspx

I secured the SQL server using “server isolation” instead.

“Server Isolation can be done several different ways, but the end result is the same: configuring the server to only respond to authorized machines.”

Source: http://blogs.technet.com/b/rycampbe/archive/2013/10/14/securing-sharepoint-harden-sql-server-in-sharepoint-environments.aspx

In my environment, I’m only allowing traffic from the soon-to-be installed SharePoint server (using the above method).
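
One simple way to express that on Windows Server 2012 R2 is a scoped firewall rule; a sketch (the SharePoint server's IP is a placeholder, and the article's IPsec-based isolation is the more thorough option):

# Allow the default SQL Server port only from the SharePoint server.
New-NetFirewallRule -DisplayName "SQL from SharePoint only" -Direction Inbound -Protocol TCP -LocalPort 1433 -RemoteAddress 192.0.2.20 -Action Allow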

 

Installation

With the security taken care of, it was finally time for the installation! Following the guide I mentioned earlier (http://sharepointpromag.com/sql-server-2012/sql-server-2012-sharepoint-2013-database-server-setup), I went through the steps. I got a firewall warning during setup (Fig 1), but it was easily fixed by poking a hole in the Windows firewall (Fig 2).

sql2014_install_firewall_warning

Fig 1. SQL Server 2014 Setup warning

 

sql2014_firewall_opening

Fig 2. Poking a hole in the firewall (Added the SharePoint server IP).

Next step:

  • Enabled Server Feature: .NET Framework 3.5 (needed for SQL server installation)

Continued the setup:

  • SQL Server Feature selection:
    • Database Engine Services
    • Management Tools – Complete
  • That's it, no extra crap:

“After selecting SQL Server Feature Installation and clicking Next, a list of SQL Server features is displayed, as shown in Figure X. We really need only one SQL Server feature for SharePoint: Database Engine Services. However, I will also install the Management Tools (Complete) feature, which gives you handy tools such as SQL Server Management Studio. As you browse through the list of features, you might be tempted to check more features than you really need. But unless you’re going to use a particular feature immediately, I don’t recommend installing it. If you want to add a feature later, such as SQL Server Reporting Services, you can just run Setup again and add the feature to your existing instance.”

Source (again): http://sharepointpromag.com/sql-server-2012/sql-server-2012-sharepoint-2013-database-server-setup

Server Configuration/Service Accounts:

  • SQL Server Agent and SQL Server Database Engine: sql_user (the AD account created earlier).

Database Engine Configuration/Specify SQL Server Administrators:

  • myadminaccount and sql_install (the AD account created earlier).

I’m using the default installation paths for SQL as this is a small scale installation.

Installation complete!

 

Tweaking

All tweaks are based on the following articles:

http://sharepointpromag.com/sql-server-2012/configure-sql-server-2012-sharepoint-2013
http://sharepointpromag.com/sql-server-2012/fine-tune-your-sql-server-2012-configuration-sharepoint-2013

  • Max degree of parallelism = 1
  • Maximum server memory 3.5GB (out of 4GB)
  • Model Database’s Recovery Model: simple
  • Compressed backups
  • Also adding the sp_install user to SQL, see below:

“To give the sp_install account the permissions it needs, in SSMS navigate to Security, Logins in Object Explorer. Right-click and select New Login. Under General, type the username and make sure you include the domain. Then on the Server Roles page, shown in Figure 3, select the dbcreator and securityadmin check boxes and verify that the public check box is still selected. Then click OK.”

sql_permissions_for_sp_install

Fig 3. Assigning Permissions to the sp_install Account

“Let me offer a few words of advice about setting the sp_install permissions. SharePoint assumes that those three roles, dbcreator, public, and securityadmin, have the default set of permissions in SQL Server. Don’t alter those permissions. I’ve seen DBAs in very secure environments try to lock down these three roles. Doing so will most certainly break SharePoint in crazy and unusual ways. That might not happen right away, and it might not happen to you when you’re using the interface. It could be a monthly timer job that fails, for instance. Also, don’t change any SQL Server permissions that SharePoint sets. SharePoint is very fussy, and if it sets permissions, it really needs them. Because of SharePoint’s rigidity on its SQL Server permissions, I recommend that you put SharePoint in its own SQL Server instance. SharePoint will thank you, and so will your DBAs.”

Source: http://sharepointpromag.com/sql-server-2012/configure-sql-server-2012-sharepoint-2013
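
For reference, the first few tweaks in the list above can also be applied with sqlcmd from PowerShell; a sketch (3584 MB ≈ 3.5GB, and some of these options require 'show advanced options'):

# Run on the SQL server as a sysadmin; applies MAXDOP, the memory cap, backup
# compression and the model database recovery model in one go.
sqlcmd -S localhost -Q "
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
EXEC sp_configure 'max server memory (MB)', 3584;
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
ALTER DATABASE model SET RECOVERY SIMPLE;
"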

That’s it for SQL, moving on to the SharePoint installation.

 

 

SharePoint Server 2013 installation

I’m being a bit lazy now and just copy/pasting information… why rewrite something that someone has already written (well)?

SharePoint Server 2013 checklist:

Before you begin to install and configure SharePoint 2013, work through the checks listed at the source below:

Source: https://technet.microsoft.com/en-us/library/cc262243.aspx

Everything in order, let's continue! (Again, the installation is pretty much based on https://captainofsharepoint.wordpress.com/2013/02/27/the-art-of-installing-sharepoint-2013-in-a-3-tier-topology-part-one/)

Well, I didn't get that far. The prerequisite checker failed with the message: "Application Server Role, Web Server (IIS) Role: configuration error".

A suggested solution was to install a hotfix from Microsoft: https://support.microsoft.com/en-us/kb/2765260. This didn't work, however, as the fix is only for Windows Server 2012, NOT the R2 version. The next attempt was to follow a guide from http://blogs.msdn.com/b/fabdulwahab/archive/2013/08/29/sharepoint-2013-installation-and-configuration-issues.aspx:

Steps to fix (Installing .Net Framework 3.5):

  1. Insert the Windows Server 2012 installation image or DVD
  2. Open a command prompt window (run as Administrator) and run the following:
  3. Dism /online /enable-feature /featurename:NetFX3 /All /Source:D:\sources\SxS /LimitAccess

sharepoint_all_prereq_complete

Fig 4. Success! 🙂

 

Continuing with the setup…

sharepoint_install_server_type

Fig 5. Complete installation (production). Using default file locations (because small scale installation).

Done. The SharePoint Configuration Wizard will then run:

sharepoint_products_configuration_wizard1

Fig 6. Create a new farm

 

sharepoint_products_configuration_wizard2

Fig 7. Database settings. Database server and account settings were discussed in the SQL chapter.

 

sharepoint_products_configuration_wizard3

Fig 8. SharePoint Central Administration Web Application

Port 18811 (or whatever port SharePoint chooses for you) must be blocked from outside the domain; otherwise the Central Administration URL will be open to anyone on the Internet.

 

sharepoint_products_configuration_wizard4

Fig 9. Completing the configuration wizard

 

sharepoint_products_configuration_wizard5

Fig 10. Configuration successful!

 

Services

There are A LOT of different services running on a SharePoint server. In a small-scale environment, however, you'll probably only need/use a few of them. I took a look at the old server and compared the services running there. Here's a screenshot of SharePoint 2010 and its active services:

sharepoint_services_on_old_server2

Fig 11. SharePoint 2010 services

From the screenshot we can see that the following services are running:

  • Central Administration
  • SharePoint Foundation incoming E-Mail
  • SharePoint Foundation Web Application
  • SharePoint Foundation Workflow Timer Service

With this in mind, I tried to keep the services at a minimum on the SharePoint 2013 server as well.

I couldn’t find the exact same ones in 2013, but I decided to go with the following:

sharepoint_services

Fig 12. SharePoint 2013 services

  • Search Service Application
  • State Service
  • Usage and Health data collection

 

After SharePoint had configured itself, I was greeted with a message that some services were running with the "wrong" accounts (Fig 13).

sharepoint_service_account_warnings

Fig 13. SharePoint Failing Services

The failing services are:

  • SharePoint Central Administration v4 (Application Pool)
  • SPTimerV4 (Windows Service) = Farm
  • AppFabricCachingService (Windows Service)

 

My idea was to run the default SharePoint services with the “sp_farm” account. Other services can be run with the “sp_srv” account if/when needed.

Update: Running the wizard is not recommended; instead, you should configure the settings manually.

 

You change the account settings in SharePoint –> Central Administration –> Configure service accounts. I changed the farm account to "sp_farm". Everything more or less broke after that 😦 I had to do some googling to get it up and running again.

Solution (before changing farm account to sp_farm):

  • Register the account (sp_farm) as a managed account. To change a managed account password, go to Central Admin > Security > Configure Managed Accounts (/_admin/ManagedAccounts.aspx) and click the Edit icon next to the account whose password you want to change.

           sharepoint_managed_accounts

           Fig 14. Register Managed Account.

  • Go to the Configure Service Accounts page, select the Farm Account, and set the new managed account
  • Reboot the server.

 

Source: https://social.technet.microsoft.com/Forums/office/en-US/8c330449-b9cd-4ed5-adeb-342466a8a59e/central-administration-no-longer-accessible-by-any-account-after-changing-farm-account-in-sharepoint?forum=sharepointadminprevious

Done. SharePoint is now installed 🙂

 

Security

You shouldn't use plain HTTP with SharePoint outside your domain; instead you should use HTTPS (HTTP over SSL/TLS). Request a certificate for your SharePoint site from a 3rd party certificate issuer (or similar), and then apply the certificate. You could/should also use HTTP redirection (http –> https) and/or Alternate Access Mappings. You can follow these guides, for example:

https://www.digicert.com/ssl-certificate-installation-microsoft-sharepoint-2013.htm
http://www.sharepointconfig.com/2010/03/configuring-a-sharepoint-website-to-allow-ssl-connections/
https://griffindocs.wordpress.com/2013/03/20/sharepoint-2013-how-to-add-ssl-to-a-web-application/
http://blogs.msdn.com/b/fabdulwahab/archive/2013/01/21/configure-ssl-for-sharepoint-2013.aspx

http://blogs.msdn.com/b/sharepoint_strategery/archive/2013/05/27/alternate-access-mappings-explained.aspx
http://blog.blksthl.com/2012/12/03/a-guide-to-alternate-access-mappings-basics-in-sharepoint-2013/
https://technet.microsoft.com/en-us/library/cc261814.aspx
https://technet.microsoft.com/en-us/library/cc263208.aspx

https://social.msdn.microsoft.com/Forums/en-US/eaab487a-bc94-4f06-981b-c62711764367/redirect-http-to-https-for-sharepoint-2013
http://www.jppinto.com/2010/03/automatically-redirect-http-requests-to-https-on-iis7-using-url-rewrite-2-0/
http://pcfromdc.blogspot.fi/2013/10/how-to-redirect-from-http-to-https-with.html
http://wellytonian.com/2014/01/sharepoint-http-https-url-redirect/
http://sharepoint.stackexchange.com/questions/64484/http-to-https-redirection-using-aam
http://www.sharepointbitme.com/?p=8

Test Lab Guide (with modifications): Configure an Integrated Exchange 2013, Lync 2013 and SharePoint 2013 Test Lab

I recently left my old position and started working at our University's Computing Centre. This also meant changes to my job assignments: I'm now deep-diving into Exchange, Lync and SharePoint. All of this will of course take (a lot of) time, and I decided to start from scratch with a Test Lab Guide (TLG) – Test Lab Guide: Configure an Integrated Exchange, Lync, and SharePoint Test Lab. (No need to break people's calendars just yet 🙂 ) This TLG will be the base for all my testing from now on, so it's important to get it working properly. I got the basics up 'n running quite fast, but then more and more trouble arose. I followed the guide to the letter, but to no avail. Google was short on answers, so the problems needed to be split up into smaller chunks. The main problem was configuring cross-product integration between all the servers. In order for the Exchange, Lync, and SharePoint servers to participate in cross-product scenarios and solutions, they must be configured to trust each other through server-to-server authentication trusts (OAuth). There's a script (https://technet.microsoft.com/en-us/library/jj204975.aspx) for this, but it didn't work for me 😦 (Well, it might actually work better now that I have a better basic understanding of what the script does. It probably also works better now that the certificates are configured correctly.)

I got lots of error 401 and/or SSL errors, for example:

Cannot acquire auth metadata document from ‘https://sp1.corp.contoso.com/_layouts/15/metadata/json/’. Error: The remote server returned an error: (401) Unauthorized)

Cannot acquire auth metadata document from ‘https://sp1.corp.contoso.com/_layouts/15/metadata/json/’. Error: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

After much digging around I came to the conclusion that it had to do with the servers' certificates. I had certificates set up for auto-enrolment, but they somehow didn't get auto-enrolled on the servers. A small detail, perhaps, but it took me forever to figure this out. I had to manually request the AD-published certificates (https://technet.microsoft.com/en-us/library/cc730689.aspx). It's begging for trouble to play around with self-signed certificates in an environment like this, so I'm glad I got it sorted out.

The problems didn't disappear even though I was now using certificates signed by the domain CA. The https binding in IIS defaults to whatever it feels like, so you have to change the https site binding to the certificate issued by your CA. Information about IIS site bindings can be found here http://www.orcsweb.com/blog/mark-newnam/how-to-set-up-site-bindings-in-internet-information-services-iis/ or here http://blogs.technet.com/b/chrad/archive/2010/01/24/understanding-iis-bindings-websites-virtual-directories-and-lastly-application-pools.aspx for example. After this was done, everything was already much better.

Still, the script wouldn't work, so I decided to try things from the script manually, step by step. After much fiddling I got it working (I don't even remember how anymore, but it was a lot of trial and error). I did at least the following things (scroll-backs in PowerShell and memory dumps from my head):

 

On the Exchange server (the first server that got all the server-to-server trusts working):

ex_lync_partnership

Fig. 1. Partner with Lync

ex_sp_partnership

Fig 2. Partner with SharePoint

 

Checking partnership with Get-PartnerApplication:

ex_get-partnerapplication

Fig 3. Get-PartnerApplication

Everything OK!

Source: https://technet.microsoft.com/en-us/library/jj649094%28v=exchg.150%29.aspx
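
For reference, the Exchange-side partnering in Figs 1–2 uses the Configure-EnterprisePartnerApplication.ps1 script that ships with Exchange 2013 (per the TechNet source above); roughly, with the TLG's lab hostnames (adjust to yours):

# Run from the Exchange Management Shell; hostnames follow the TLG's corp.contoso.com lab.
cd "$env:ExchangeInstallPath\Scripts"
.\Configure-EnterprisePartnerApplication.ps1 -AuthMetadataUrl "https://lync1.corp.contoso.com/metadata/json/1" -ApplicationType Lync
.\Configure-EnterprisePartnerApplication.ps1 -AuthMetadataUrl "https://sp1.corp.contoso.com/_layouts/15/metadata/json/1" -ApplicationType SharePoint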

 

On Lync server:

lync_ex_partnership

Fig 4. Partner with Exchange

lync_sp_partnership

Fig 5. Partner with SharePoint

 

Checking partnership with Get-CsPartnerApplication:

lync_get-cspartnerapplication

Fig 6. Get-CsPartnerApplication

Everything OK!

Source: https://technet.microsoft.com/en-us/library/jj205253.aspx and https://technet.microsoft.com/en-us/library/jj204975.aspx (This was the failing script for me, so I did it in stages as in the screenshots above). Many more sources also which I can’t remember…
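
The Lync-side equivalent (per the first TechNet source above) is roughly:

# Metadata URLs use the TLG's lab names; Exchange publishes its metadata via the autodiscover host.
New-CsPartnerApplication -Identity Exchange -ApplicationTrustLevel Full -MetadataUrl "https://autodiscover.corp.contoso.com/autodiscover/metadata/json/1"
New-CsPartnerApplication -Identity SharePoint -ApplicationTrustLevel Full -MetadataUrl "https://sp1.corp.contoso.com/_layouts/15/metadata/json/1"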

 

On SharePoint server:

sp_ex_partnership

Fig 7. Partner with Exchange (never mind the error, the partnership was already done).

sp_lync_partnership

Fig 8. Partner with Lync

 

Checking Get-SPTrustedSecurityTokenIssuer:

sp_get-sptrustedsecuritytokenissuer

Fig 9. Get-SPTrustedSecurityTokenIssuer

Seems OK! The permissions on the SharePoint 2013 server were already set up at an earlier stage:

At the Windows PowerShell command prompt, type the following commands:

# Run in the SharePoint Management Shell; replace <HostName> with your web application's host name.
$exchange = Get-SPTrustedSecurityTokenIssuer
$app = Get-SPAppPrincipal -Site http://<HostName> -NameIdentifier $exchange.NameId
$site = Get-SPSite http://<HostName>
Set-SPAppPrincipalPermission -AppPrincipal $app -Site $site.RootWeb -Scope sitesubscription -Right fullcontrol -EnableAppOnlyPolicy

Source: https://technet.microsoft.com/en-us/library/jj655399.aspx, https://technet.microsoft.com/en-us/library/jj670179.aspx

 

Well, I learned a lot yet again. I had to dig into a lot of other stuff as well, but at least it was easily done with Google. The main problems were certificates and server-to-server trust issues. The TLG itself was very nicely written, although it didn't work as expected for me. Nonetheless, everything is now set up and working, so I can continue doing all kinds of tests. This test environment will help me A LOT on my journey with Exchange, Lync and SharePoint. Wish me luck, I know I'm gonna need it 🙂

My next experiment will be to add another exchange server (or two) and use Database Availability Groups (DAGs). (Actually already done using the excellent guide at http://exchangeserverpro.com/exchange-server-2013-database-availability-groups/)

I’ll also be looking at High Availability for the CAS. Stay tuned!

More useful sources (out of the millions I already found):

http://memphistech.net/?p=280
https://digitalbamboo.wordpress.com/2013/09/24/setting-up-exchange-unified-messaging-with-lync-2013-integration-for-voicemail/
http://blog.insidelync.com/2012/08/the-lync-2013-preview-unified-contact-store-ucs/
https://mchahla.wordpress.com/2013/01/12/integrating-lync-server-2013-exchange-server-2013-owa/

Migrating MDT 2013 from Windows Server 2008 R2 to Windows Server 2012 R2

Our deployment server (Windows Server 2008 R2) was getting a bit dated, and I wanted to enjoy all the new WDS features of Windows Server 2012 R2 (better UEFI support and enhanced boot image download performance, for example). In this migration case I'll be using physical servers, so the migration will be physical-to-physical. The servers are old, but they have enough power for the job 🙂

Old server (Fig 1):

  • Fujitsu Siemens Primergy RX300 S2
  • 2 x Intel Xeon 3.20GHz CPUs
  • 4GB RAM
  • 6 x 146GB SCSI HDDs in HW RAID-5

fusi_primergy

Fig 1. Fujitsu Siemens Primergy RX300 S2 (currently running Windows Server 2008 R2).

For the replacement/migration I originally had an identical server, which ran Windows Server 2008 R2. It got upgraded to Windows Server 2012 some time ago, however. I now tried upgrading it to Windows Server 2012 R2, but it failed to start properly after the setup process completed (error message and endless reboots). This was (probably) due to the same problem as before: there are no drivers for the LSI MegaRAID 320-2E card. It was possible to trick the Windows Server 2008 R2 installation into using the Windows Server 2003 drivers (the newest ones available), but this wasn't the case with a clean install of Windows Server 2012 (R2). However, an upgrade from 2008 to 2012 was/is possible. As a Server 2012 to Server 2012 R2 upgrade wasn't possible due to errors, I thought I'd try the old method: I clean-installed Server 2008 R2 and tried a Server 2012 R2 upgrade instead of the Server 2012 upgrade. This upgrade path isn't "allowed" by MS, however, so I HAD to do a clean install – and I ran into the exact same problems as with the Server 2012 to Server 2012 R2 upgrade: the server won't start properly, due to a very old and unsupported LSI MegaRAID card. Anyways, it was time to move on, as I won't waste any more time on these old bastards (Fujitsus).

This was not the end of the world though; I just used an old workstation instead of a server. It doesn't have to be a hard-core server anyhow, because we usually don't have many simultaneous deployments. It's basically enough to have a normal workstation with decent-capacity hard drives in a RAID configuration for safety (RAID-1 or 5). Normally this sort of thing would also be a good candidate for a virtual machine, but our hosts are a bit full at the moment so it would've been even slower 🙂 With that said, here's the "new server" (Fig 2):

  • Osborne (in an Antec Sonata III case) with an Asus P5Q-EM motherboard
  • Intel Core 2 Duo 3.16GHz CPU
  • 4GB RAM
  • 2 x 250GB HDDs in HW RAID-1

sonataIII

Fig 2. “Osborne”

I installed the new "server" from scratch and copied over the needed content from the old server as the work progressed. Steps:

  • Installed Windows Server 2012 R2 from DVD
  • Ran Windows Update to get it fully patched
  • Installed Classic Shell (yes, I still don’t like Metro)
  • Installed Notepad++ for easier editing of configuration files
  • Installed Intel Rapid Storage Technology from Intel.com to monitor raid health (Fig 3). Also configured email alerts.

          intel_RST 

           Fig 3. Intel RST

  • Partitioned/Resized the hard drive: 60GB for the system drive and the rest for data (Deployment Share)
  • Downloaded and installed ADK 8.1 for Windows 8.1 Update
  • Downloaded and installed MDT 2013
  • Enabled WDS Server Role
  • Set a static IP
  • Joined the production domain
  • Copied the whole Deployment Share directory (about 60GB) from the old server to this new one, (D:\DeploymentShare)
  • Shared the new Deployment Share directory via file sharing
  • Started MDT 2013 and chose to Open Deployment Share. Pointed it to the newly copied directory, D:\DeploymentShare (Fig 4)

          open_deployment_share

          Fig 4. Open Deployment Share

  • Successful import! (Fig 5)

          open_deployment_share_success

          Fig 5. Open Deployment Share – success.

  • Went to properties on the Deployment Share and changed UNC path to point to the new server (Fig 6)

          set_depl_share_unc_path

          Fig 6. Set Network (UNC) path

  • Edited D:\DeploymentShare\Control\Bootstrap.ini and changed DeployRoot to point to the new server (Fig 7)

          unc_path_inbootstrap_ini

          Fig 7. DeployRoot in Bootstrap.ini

  • Updated the Deployment Share. MDT-part now done.
  • Started WDS and configured it (kind of self-explanatory so won’t add any steps or screenshots)
  • Added the newly created boot image from MDT (D:\DeploymentShare\Boot\LiteTouchPE_x86.wim) to WDS Boot Images
  • Changed the TFTP Maximum Block Size for better boot image download performance (Fig 8). This is a new WDS feature in Windows Server 2012 (R2). More information: http://technet.microsoft.com/en-us/magazine/dn163597.aspx. Yeah, it's actually noticeably faster.

          wds_tftp_max_block_size

          Fig 8. TFTP Maximum Block Size

  • Updated the DHCP configuration on our Linux DHCP server to point to the new server
  • Ran a test-deployment – Everything worked as before, except now I’m using a newer version of WDS with better UEFI support 🙂
  • Success yet again!
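
For reference, the Bootstrap.ini edit from Fig 7 boils down to something like this (server and share names are hypothetical):

[Settings]
Priority=Default

[Default]
DeployRoot=\\NEWSERVER\DeploymentShare$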

 

Source: http://lacietech.blogspot.fi/2010/10/migrating-mdt-2010-from-one-server-to.html

As usual, if you are interested in the whole deployment process/configuration, read my earlier post: Deploying Windows 7/8 with Microsoft Deployment Toolkit (MDT) 2012 Update 1 and Windows Deployment Services (WDS)

Installing a new Windows 2012 R2 Remote Desktop Services (RDS) Server

Server installation

I've been thinking about renewing our old Terminal Server 2003 for a loooooong time (since the migration from VMware to Hyper-V). We had licensing problems with Windows Server 2008, so I thought I'd give it a new try with Windows Server 2012 R2. Nothing much to it really; here are the steps:

  • Installed our basic image of Windows Server 2012 R2 Standard
  • Applied local security policies (exported from Security Compliance Manager 3.0) with the LocalGPO tool (optional extra protection on top of our GPOs).
    • The tool wasn't updated for Windows Server 2012 R2, only for Windows Server 2012. The script checks the OS version and refuses to continue if the wrong OS is detected. Well, theoretically at least: I edited the script and commented out the line that checks the OS version (in LocalGPO.wsf).
    • That is, I replaced "Call ChkOSVersion" with "' Call ChkOSVersion" (commenting out the whole check).
    • Ran the script:
    • C:\Program Files (x86)\LocalGPO>cscript LocalGPO.wsf /path:c:\Users\xxx\Downloads\gpo\{3efc9336-94bb-4719-b14d-c34b8121f86f}
      Microsoft (R) Windows Script Host Version 5.8
      Copyright (C) Microsoft Corporation. All rights reserved.

      Modifying Local Policy… this process can take a few moments.

      Applied valid INF from c:\Users\xxx\Downloads\gpo\{3efc9336-94bb-4719-b14d-c34b8121f86f}

      Local Policy Modified!

      Please restart the computer to refresh the Local Policy

  • Worked!
  • Installed the RDS Server role (Fig 1) following a guide from http://www.techieshelp.com/windows-server-2012-install-and-configure-remote-desktop-services/ (my own knowledge was a bit rusty 🙂 )
  • I used Quick Start and Session-based Desktop Deployment (no need for “full” virtual machine-based deployment in our case)
  • Remote Desktop Services RemoteApp (RD Web Access) also got installed, but I won’t use that feature for now.

install_RDS

Fig 1. Installing the roles… yawn.

  • Next step is licensing. An RDS server requires licenses, either per device or per user (Remote Desktop CALs). Our University has some licenses, so no problem. I won't go into the licensing system here; I'll just say that we'll be using per-device licenses and that they will be issued from an existing license server in our domain. The procedure for adding the license server was a bit different from the one in the above guide, though. The guide says "Click the RD licensing icon and either add the server as your license server or point it to your existing license server on the network by entering the server name or IP then click the forward arrow". I couldn't point to an existing license server in this step. Oh well, the steps are still quite similar, but I skipped to the next step in the guide. My steps are in Fig 2.

rds_licensing

Fig 2. RDS licensing.

  1. TASKS, Edit Deployment Properties
  2. Per Device
  3. Enter license server
  4. Done!

If you are interested in the difference between per user and per device licenses, read http://technet.microsoft.com/en-us/library/cc753650.aspx for example.

Checking that licensing works can be done from the RD Licensing Diagnoser (I didn't install RD Licensing Manager; it is already installed on another (license) server). Everything seemed OK! (Fig 3 has the details.)

rds_licensing_checking

Fig 3. RD Licensing Diagnoser.

Checking that a client gets a (permanent) license can be done from the license server/RD Licensing Manager (Fig 4).

rds_licensing_checking_per_host

Fig 4. RD Licensing Manager. Five (per device) licenses are currently checked out

 

Client configuration

Now that the licensing part seems to be working just fine, it's time to test different clients. We are using Windows, Linux and Mac clients. We also have a small classroom with WySe thin clients. For once, there were no problems connecting from the Linux clients 🙂 Mac, on the other hand, was playing hard to get, but the best (and easiest) solution is to use the newest Remote Desktop client from Microsoft (available in the App Store, https://itunes.apple.com/en/app/microsoft-remote-desktop/id715768417?mt=12). This works out of the box. If you are interested in the problems with the Remote Desktop client bundled with MS Office 2011 for Mac, read:

http://blog.mikejmcguire.com/2013/10/15/r2-d2-you-know-better-than-to-trust-a-strange-computer-why-doesnt-the-mac-os-x-rdp-client-trust-windows-server-2012-r2-2/

I've noticed that there can be problems with Network Level Authentication and RDP, however. Even though it's probably not the best idea to disable it, I've done it – it saves you a LOT of headache. Older clients are able to connect without problems when it's disabled.

The WySe thin clients had an old firmware which didn't support the RDP security of the new Windows Server 2012 R2. I've upgraded their firmware before, so I knew this was probably the solution (and it was). The steps for upgrading the firmware (short version):

  • Download newest firmware (it’s not free, you have to have an agreement with a vendor)
  • Upload the firmware file (VL10_wnos in our case) to the /var/ftp/wyse/wnos directory on the FTP server (or a similar path on a Windows server)
  • Change the wnos.ini file to upgrade its firmware on reboot, “Autoload=2” (# Selects firmware update mode: 0=disable,1=upgrade/downgrade, 2=upgrade only)
  • After upgrading the firmware, put the flag back to “0” (disable).
  • Done. Windows Server 2012 R2 compatible WySe Thin Clients 🙂

 

Server configuration

There's nothing much to do really if the "underlying work" is done. By this I mean using existing group policies and so on. We already have a group policy that enables folder redirection, so the users' home directories won't get stored locally (only cached locally). This way you can easily upgrade/reinstall the server without losing user data. I'm not going to write about different GPOs and folder redirection, because that's outside the scope of this post.

The server was "empty", so I had to install applications. You shouldn't install applications "the regular way" on an RDS server; instead you should use the special install mode called RD-Install. You can do the mode switch either via the command prompt or graphically. Details here: http://technet.microsoft.com/en-us/magazine/ff432698.aspx. Nothing much to tell about the process of installing applications – about as exciting as watching paint dry 🙂 Just install according to your users' needs.
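
The command-prompt variant of the mode switch looks like this (the standard commands, shown for reference):

change user /install
rem ...install the application...
change user /execute
rem you can check the current mode at any time:
change user /query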

I also installed Classic Shell (yes, I’m a fan), as I don’t want the users to get annoyed and confused by the new (crappy) metro interface. I was able to force the same settings for every user with policies. There are (group) policy definitions available in C:\Program Files\Classic Shell\PolicyDefinitions.zip (from version 4.0.4 onward). Unzip them to C:\Windows\PolicyDefinitions (or put them on your domain controller) and play around until you find the right settings for your needs.

Settings that I changed for Classic Shell:

  • Show Metro Apps: enabled (no mark in the Show Metro Apps checkbox, which means that the Metro apps aren’t shown).
  • Disable active corners: enabled (Disable active corners: All)
  • Enable accessibility: enabled (no mark in Enable accessibility checkbox)
  • Enable settings: enabled (no mark in checkbox, which means users can’t change Classic Shell settings)
  • Menu items for the Windows 7 style: enabled (I modified the menu to my preference and then copy/pasted the registry settings from HKCU\Software\IvoSoft\ClassicStartMenu\Settings into this policy setting)
  • Shift+Win opens: enabled (Shift+Win opens Windows Start menu, that is, metro)
  • Button look: enabled (Button look: Aero button)
  • Show Start screen shortcut: enabled (no mark in the checkbox)
  • Windows key opens: enabled (Windows key opens: Classic Start Menu)

If you want to skip the metro screen by default (I surely do), use group policy preferences: http://www.petri.co.il/bypass-start-screen-windows-8-1-server-2012-r2.htm. This setting is also available in Classic Shell, but it doesn’t work in a RDS environment without using group policy preferences.

Furthermore, I disabled write access to the root of the C:\ drive. Users shouldn’t be able to write stuff on the RDS server, only in their own profiles. You can do this by going to the root of the C:\ drive and selecting Properties –> Security. Then, under Security, choose Advanced. Remove all the rights from “Users” except Read & Execute. See Fig 5 (an icacls sketch follows the figure).

adv_sec_settings_c

Fig 5. Advanced Security Settings
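The same thing can be scripted with icacls. This is a sketch; the exact ACE flags are my assumption of what the GUI change does, so double-check on a test machine first:

# Replace the explicit grants for Users on C:\ with Read & Execute only
icacls C:\ /grant:r "BUILTIN\Users":(OI)(CI)(RX)
# Show the resulting ACL
icacls C:\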

You will receive many errors about setting file permissions (Fig 6 for example), which is normal. In the end this approach does work, and users aren’t able to write to the root of the C:\ drive. Everything else keeps working as normal (tested beforehand).

adv_sec_settings_error

Fig 6. Error. Just ignore.

As the old server didn’t have folder redirection, the users’ files were stored locally on the server 😦 Luckily people didn’t have that much important stuff stored on the server, so I’ll just keep a backup of the files and copy them back to any user that needs their old files. As I already said, the new server is using folder redirection, so nothing gets stored locally (and I won’t have this problem again).

 

Gotchas

After the server was in production state I noticed a weird message in Server Manager –> Remote Desktop Services –> Overview. See Fig 7.

RDS_error_server_manager 

Fig 7. The following servers in this deployment are not part of the server pool:
          1. [server_name]
          The servers must be added to the server pool.

I knew what this was about, because I had changed the hostname of the server after testing it. Even though the server was working as it should, it seems that RDS doesn’t like it if the hostname/machine name is changed after the RDS server role is installed. I did a bit of research, and the solution is here: http://support.microsoft.com/kb/2910155. Well, almost anyway. Trying the Remove-RDServer command gave me “Remove-RDServer : The RD Session Host servers are part of a session collection”. Well, I didn’t know much about this, so I thought I’d try removing the collection. “Get-RDSessionCollection” gave me:

PS C:\Windows\system32> Get-RDSessionCollection

CollectionName              Size  ResourceType        CollectionDescription
--------------              ----  ------------        ---------------------
QuickSessionCollection      1     RemoteApp programs

I tried removing it with “Remove-RDSessionCollection”. Well, good idea, but it didn’t work. Actually nothing I tried worked, until I saw that Server Manager displayed “Connecting to RD Connection Broker” and after that nothing happened. I figured the Connection Broker part was broken. So I uninstalled the Connection Broker with the command “Remove-WindowsFeature -Name RDS-Connection-Broker” and rebooted. I then reinstalled the Connection Broker with “Add-WindowsFeature -Name RDS-Connection-Broker”.

Now the Overview window in Server Manager told me that I didn’t have any deployments ready, and that I should add one from Server Manager –> Add roles and features. Well, I did the exact same steps as before (Server installation – Quick Start and so on), and everything was back to normal. Phew! 🙂 Lesson learned: don’t mess with the computer name.

Gotcha sources:

http://www.virtualizationadmin.com/articles-tutorials/vdi-articles/general/using-powershell-control-rds-windows-server-2012.html
http://blogs.technet.com/b/manojnair/archive/2011/12/02/rds-powershell-tfm-part-iii-configuring-remote-desktop-connection-broker-using-powershell.aspx

 

A final screenshot when “everything is back to normal”:

RDS_Overview_working_as_normal

Fig 8. RDS Overview.

And another one from the Desktop:

RDS_Win2012R2_desktop_screenshot

Fig 9. RDS in action

To sum it up: my deployment is a session-based desktop deployment (Quick Start) with per-device licensing. The license server is on another host. RD Web Access is disabled (even though installed) because I won’t use it at the moment. I also use Classic Shell because I like it 🙂

Happy RDS:ing to you all! 🙂

Upgrading Windows Server 2012 to Windows Server 2012 R2 on Hyper-V hosts

Introduction

I recently did a VMware to Hyper-V migration on two of our virtualization hosts. Everything has been working great, but now that Windows Server 2012 R2 is out I decided to give it a try. There are also cool new features in the new Hyper-V version, so why not upgrade. Here’s a link for 10 great new features in Windows Server 2012 R2 Hyper-V:

http://www.infoworld.com/slideshow/104337/10-great-new-features-in-windows-server-2012-r2-hyper-v-220067

For me, the most interesting features are VM Direct Connect and copy/paste between host and VM via the shared clipboard. Online VM exporting and cloning and online VHDX resizing also seem like useful features.

 

 

Upgrade

Enough with the features, it’s time to upgrade! As usual I started out in a virtual test environment. The process was actually very easy and pain-free. Here are my steps:

 

· Inserted/mounted Server 2012 R2 media

· Chose upgrade

· Windows ran its own Windows Compatibility Report

· Told me to reboot

· I also received the following notification:

“Setup has detected one or more virtual machines which are part of a replication relationship. To avoid replication failures, upgrade the Replica server before upgrading the primary server. Once the Replica server is upgraded, any uncommitted failover operations will be committed, test failover virtual machines will be deleted, and the recovery history of the Replica virtual machines will be deleted. Setup has detected that one or more virtual drives are directly attached to physical devices. You might need to reconnect the virtual drives to these devices after the upgrade is complete”.

 

· I did a planned failover on one of the virtual machines on the replica server and then resumed the setup. Now every virtual machine had the same primary server.

· Upgraded the replica server. The upgrade went fine, with no problems with virtual drives directly attached to physical devices. (Nothing to worry about; apparently the warning has to do with .iso images connected/mounted to the VMs: http://social.technet.microsoft.com/Forums/windowsserver/en-US/2d669709-f7d0-4bdc-974b-e0f5ab5552df/warning-compatibility-issue-during-upgrade-von-2008-r2-to-2012)

· One of the virtual machines had problems with replication (Fig 1) after the upgrade: 

 

clip_image002

 

Fig 1. Replication health

 

 

From event viewer:

 

Could not replicate changes for virtual machine ‘server 2012 r2’ as the Replica server ‘hyper1’ on port ‘443’ is not reachable. The operation timed out (0x00002EE2). (Virtual Machine ID 54519BBA-5127-4D5E-B9C3-D988BB6591F7)

 

Nothing critical, the server was just not able to replicate when the other server was down due to the upgrade. I reset the statistics and resumed replication. Everything went back to normal.

 

Now it was time to upgrade the other server, which basically follows the same procedure. Just to test, I didn’t even shut down the virtual machines before the upgrade. Setup was smart enough to tell me to shut down the virtual machines before attempting an upgrade, however. I was also told to restart the server before upgrading. I did both and resumed setup. As this server was the primary server and not the replica server, I could ignore the message about upgrading the replica server first (already done).

 

That’s it. It was really that simple 🙂

 

 

Production environment

The upgrade procedure in the production environment was obviously about the same as in the virtual test environment. Here are my steps:

 

· Shut down the virtual machines on the Hyper-V host

· Paused replication on both Hyper-V hosts. This way I didn’t have to worry about which server was primary and which was replica (see the PowerShell sketch after this list). (I didn’t find any information about this online, so I just tested the theory. Worked great 🙂 )

· Ran the upgrade

· A bit of waiting (about 30 min in total, Fig 2)

 

clip_image004

 

Fig 2. Upgrading…

 

· Everything went fine on the first Hyper-V host

· …and also on the second 🙂

· Upgraded Hyper-V Integration Services in the virtual machines

· Resumed replication and did a reset on the replication statistics. Have some patience, replication will start automatically within a couple of minutes after this.

· Success, everything is back to normal except now I’m running Windows Server 2012 R2 instead of Windows Server 2012 (Fig 3) 🙂
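For reference, here’s roughly how the pause/resume part looks in PowerShell; a sketch using the Hyper-V module cmdlets, run on each host:

# Pause replication for every VM that has replication enabled
Get-VM | Where-Object { $_.ReplicationState -ne 'Disabled' } | Suspend-VMReplication
# ...upgrade the host...
# Afterwards: resume replication and reset the statistics
Get-VM | Where-Object { $_.ReplicationState -ne 'Disabled' } | Resume-VMReplication
Get-VM | Where-Object { $_.ReplicationState -ne 'Disabled' } | Reset-VMReplicationStatistics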

 

clip_image006

 

Fig 3. Windows Server 2012 R2

 

 

Post installation tweaks

As this was an upgrade installation, Windows left its old installation in “Windows.old”. I’m only running the Hyper-V role on these servers, and they are working just fine, so I don’t need any of the old files. To remove Windows.old, follow these steps (sketched in PowerShell after the list):

 

· Enable Disk Cleanup Utility in Windows Server 2012 R2

· Run Disk Cleanup Utility and remove “Previous Windows Installations & Windows Upgrade log files”

· Done
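In PowerShell terms, the first step is just adding a feature; the rest is clicking through the wizard. A sketch:

# Disk Cleanup is part of the Desktop Experience feature on Server 2012 R2
Install-WindowsFeature Desktop-Experience -Restart
# After the reboot, run Disk Cleanup and tick "Previous Windows installation(s)"
# and "Windows upgrade log files"
cleanmgr.exe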

 

Sources:

http://thenrml.wordpress.com/2013/06/26/remove-windows-old-after-using-in-place-upgrade-method-on-windows-server-2012-r2-preview/

http://blogs.technet.com/b/chad/archive/2012/10/08/tip-51-cleanup-on-isle-3-get-back-disk-cleanup-wizard-on-windows-server-2008-amp-2012.aspx

 

 

I also re-enabled ping (Echo Request – ICMPv4) in the firewall, as it was disabled by the upgrade.
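If you’d rather do this from PowerShell than from the firewall GUI, something like this should work (the display name below is from the built-in rule set; verify it with Get-NetFirewallRule on your own box first):

# Re-enable inbound ICMPv4 echo requests (ping)
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"
# Check the result
Get-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" | Format-Table DisplayName, Enabled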

 

 

Migrating from VMware to Hyper-V (including File Server Migration)

Topics covered: 

 

· Windows Server 2012/Hyper-V installation

· Certificate based Hyper-V replication

· Virtual to physical machine conversions

· Virtual to virtual machine conversions

· VMware virtual machine backups

· VMware to Hyper-V conversions

· File Server upgrade/migration

 

 

Introduction

 

I’ve been thinking about upgrading our File Server and Terminal Server for a while. Both the File Server and the Terminal Server are running Windows Server 2003. The servers are running from two different VMware ESXi host servers with identical versions of VMware ESXi installed (v. 3.5 update 4).

 

Current problems:

· VMware ESXi 3.5 doesn’t support a Windows Server version newer than 2008 R2

· I want to use Windows Server 2012 for the File Server and Terminal Server

· Can’t upgrade VMware ESXi to a newer version because our hardware is too old/not compatible with a new(er) version (4.0 –>)

· Hard disk space on the current servers is limited –> problems upgrading, because all virtual machines can’t run on just one VMware host during the upgrade. Actually they CAN, but it would be painfully slow, as the host with the most disk space only has SATA disks instead of SCSI/SAS…

· Expensive to upgrade both servers / buy new hardware

· I want a better way to do virtual machine backups. Hyper-V does this nicely with replicas (or live migration without shared storage). The current VMware backup solution is pretty much manual work…

 

Solutions:

· Use Hyper-V instead of VMware – works on older hardware

· Due to hard disk space limitations I’m trying a virtual-to-physical conversion on one of the virtual machines. This will be a temporary (perhaps permanent…) home for the machine while I’m doing the VMware to Hyper-V conversion

· This is a cheap alternative solution. No new hardware needed

 

 

Current hardware

 

VMware host server 1:

· VMware ESXi 3.5 update 4

· HP Proliant DL 180 G5

· Intel Xeon E5405@2.0GHz, 4 cores

· 16GB RAM

· Dual NIC

· 2.0TB (4 x 500GB) hard disk space in RAID-5 (SATA)

· 6 virtual machines, 2 active (one will be moved to the other VMware host, the other will be converted to physical)

 

VMware host server 2:

· VMware ESXi 3.5 update 4

· HP Proliant ML 350 G5

· Intel Xeon E5405@2.0GHz, 4 cores

· 18GB RAM

· Dual NIC

· 730GB (5 x 146GB) hard disk space in RAID-5 (SAS)

· 3 virtual machines, 2 active (will also remain active)

 

Old server:

· Fujitsu Siemens Primergy RX 300 S2

· 2 x Intel Xeon 3.20GHz CPUs

· 4GB RAM

· 6 x 146GB SCSI hard disks in RAID-5

· Dual NIC

 

 

 

Preparation

 

Host server 1 is eating up quite a bit of hard disk space at the moment, mainly because of the MDT/WDS (deployment) server. My approach is to convert this virtual machine into a physical machine to save disk space on the current host. I’m doing it with this virtual machine as it isn’t used every day and isn’t that critical. If I’m lucky, this is the only server I have to make physical, and all the other servers will fit on one of the current VMware host servers (VMware host server 2, the faster one). Update: they did fit 🙂

 

I started off by installing the Windows Server Backup feature on our MDT server. After that I ran the Backup Once Wizard, saved the image to a network share, and then copied the image to an external USB hard disk.
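The same backup can be done from the command line with wbadmin. A sketch; the share path is a placeholder:

# One-time backup of all critical volumes to a network share
wbadmin start backup -backupTarget:\\backupserver\share -allCritical -quiet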

In the meantime I had prepared the old physical server (Fujitsu) for this image. I booted the server with the Windows Server 2008 R2 boot CD and chose the advanced installation options. From there I could choose to install the operating system from an earlier created image. At the same time I chose the option to install third-party SCSI drivers, which in my case was a must. I had previously downloaded the LSI MegaRAID SCSI 320-2E drivers and copied them to a USB stick so I could use them during the image restore. After an hour or so the image was restored to the Primergy server, and it booted just fine. After this I uninstalled VMware Tools. Virtual-to-physical: success 🙂

 

 

Backing up and moving VMware virtual machines between hosts

 

Now that I had moved one of the active virtual machines to a physical host, I could start moving the other virtual machines from one host to another with the help of VMware Infrastructure Client and VMware vCenter Converter Standalone (Fig 1).

First off, I copied the non-active, powered-down machines to a USB drive with VMware Infrastructure Client. After that I moved/transferred the powered-on file server to the other host (during non-office hours) with the help of VMware vCenter Converter Standalone (Fig 2). This is a nice tool which does the job very well. I’ve seen it called “the poor man’s replication”, which is quite a good description of the procedure. You can do the conversion from physical-to-virtual (P2V) or from virtual-to-virtual (V2V). The virtual machine can be switched on during the process, and changes made during the procedure are synced afterwards. After a successful conversion, I shut down the “old” file server and powered on the “new” one on the other host. It booted just fine, and I was one step closer to replacing VMware ESXi with Hyper-V on this host.

 

clip_image002

 

Fig 1. Copying virtual machines in VMware Infrastructure Client

 

 

clip_image004

 

Fig 2. VMware vCenter Converter Standalone

 

 

 

Planning for Hyper-V

 

After triple-checking all backups and doing lots of homework, it’s finally time to wipe one of the VMware hosts and install Microsoft Hyper-V. The installation is rather basic, nothing special; it’s the actual Hyper-V configuration that is the interesting part. I’ve done lots and lots of testing in a virtual environment, so now I hopefully know what will suit our needs. First, let me say that high availability/failover/clustering was not an option, as we don’t have any shared storage (SAN, NAS…) available. That left me with the replica feature and Shared-Nothing Live Migration. I’ve tested them both in a virtual environment, and they don’t work the same way. Here are my comments about the two:

 

Replica

 

· Hosts can be in a workgroup or in a domain

· You will decide which virtual machines you will replicate (not move) to the other Hyper-V host

· Replication is set up manually, but after that synchronization happens automatically

· The virtual machine has to be switched OFF when using planned failover (moving the virtual machine from one host to the other)

o Will cause a bit of downtime (depending on the size of the vm changes and network speed)

//end of own comments

 

//Begin quote

“In this scenario, we define two “sites”: the “primary site,” which is the location where the virtualized environment normally operates; and the “Replica site,” which is the location of the server that will receive the replicated data. At the primary site, the primary server is the physical server that hosts one or more primary virtual machines. At the Replica site, the Replica server similarly hosts the Replica virtual machines.

 

Once replication is configured and enabled, an initial copy of data from the primary virtual machines must be sent to the Replica virtual machines. We call this “initial replication” and you can choose to accomplish it directly over the network or by copying the data to a physical device and transporting that to the Replica site.

 

When replication is underway, changes in the primary virtual machines are transmitted over the network periodically to the Replica virtual machines. The exact frequency varies depending on how long a replication cycle takes to finish (depending in turn on the network throughput, among other things), but generally replication occurs approximately every 5-15 minutes.

 

You can choose to move operations on any primary virtual machine to its corresponding Replica virtual machine at any time, an action we call “planned failover.” In a planned failover, any un-replicated changes are first copied over to the Replica virtual machine and the primary virtual machine is shut down, so no loss of data occurs. After the planned failover, the Replica virtual machine takes over the workload; to provide similar protection for the virtual machine that is now servicing the workload, you configure “reverse replication” to send changes back to the primary virtual machine (once that comes back online).

 

If the primary server should fail unexpectedly, perhaps as a result of a major hardware failure or a natural disaster, you can bring up the Replica virtual machines to take over the workload—this is “unplanned failover.” In unplanned failover, there is the possibility of data loss, since there was no opportunity to copy over changes that might not have been replicated yet.”

 

Source: http://technet.microsoft.com/en-us/library/jj134172.aspx

 

More information:

 

“With Hyper-V Replica, administrators can replicate their Hyper-V virtual machines from one Hyper-V host at a primary site to another Hyper-V host at the Replica site. This feature lowers the total cost-of-ownership for an organization by providing a storage-agnostic and workload-agnostic solution that replicates efficiently, periodically, and asynchronously over IP-based networks across different storage subsystems and across sites. This scenario does not rely on shared storage, storage arrays, or other software replication technologies”.

 

clip_image006

“For small and medium business, Hyper-V replica is a technically easy to implement and financially very affordable disaster recovery (DR) solution”.

 

Source: http://blogs.technet.com/b/yungchou/archive/2013/04/21/mad-about-windows-server-2012-in-7-ways.aspx

 

//End quote

 

 

Shared-Nothing Live Migration

 

· Hosts require domain membership

· You will decide which virtual machines you will migrate to the other Hyper-V host

· Migration is done manually

· The virtual machine can remain powered ON during migration

· Zero downtime when live migrating from host to host

· No backup solution, you are just moving the virtual machine from host to host

//end of own comments

 

//Begin quote

“Hyper-V live migration moves running virtual machines from one physical server to another with no impact on virtual machine availability to users. By pre-copying the memory of the migrating virtual machine to the destination server, live migration minimizes the transfer time of the virtual machine. A live migration is deterministic, which means that the administrator, or script, that initiates the live migration determines which computer is used as the destination for the live migration. The guest operating system of the migrating virtual machine is not aware that the migration is happening, so no special configuration for the guest operating system is needed.”

 

Source: http://technet.microsoft.com/en-us/library/hh831435.aspx

 

More information:

 

“Live Migration is the ability to move a virtual machine from one host to another while powered on without losing any data or incurring downtime. With Hyper-V in Windows Server 2012, Live Migration can be performed on VMs using shared storage (SMB share) or on VMs that have been clustered.

Windows Server 2012 also introduces a new shared nothing live migration where it needs no shared storage, no shared cluster membership. All it requires is a Gigabit Ethernet connection between Windows Server 2012 Hyper-V hosts. With shared nothing live migration, a user can relocate a VM between Hyper-V hosts, including moving the VM’s virtual hard disks (VHDs), memory content, processor, and device state with no downtime to the VM. In the most extreme scenario, a VM running on a laptop with VHDs on the local hard disk can be moved to another laptop that’s connected by a single Gigabit Ethernet network cable”.

 

clip_image008

 

“One should not assume that shared-nothing live migration suggests that failover clustering is no longer necessary. Failover clustering provides a high availability solution, whereas shared-nothing live migration is a mobility solution that gives new flexibility in a planned movement of VMs between Hyper-V hosts. Live migration supplements failover clustering. Think of being able to move VMs into, out of, and between clusters and between standalone hosts without downtime. Any storage dependencies are removed with shared-nothing live migration”.

 

Source: http://blogs.technet.com/b/yungchou/archive/2013/04/21/mad-about-windows-server-2012-in-7-ways.aspx

 

//End quote

 

From my tests, it seemed that replica was faster than live migration (at least after the initial copy). This isn’t much of a surprise, considering that the whole virtual machine has to be moved during live migration (without shared storage). When using replica, there is a check for what has changed between source and destination, which makes it faster. You could look at it the same way as incremental backups once the initial replication has been done.

 

I decided to go with replication for our production environment; it suits our needs better than Shared-Nothing Live Migration. It makes no sense moving the VMs between the hosts when replica instead gives you a “spare backup”. If we had a SAN in our environment, then SNLM would be a considerable option. Also, with replica I don’t have to join the hosts to the domain. There are many debates on whether you should join your hosts to a (separate) domain or keep them in a workgroup. I guess it all comes down to planning and your own needs. In my case I’m going with replicas, which don’t require domain membership; certificates are used instead.

 

clip_image010

Fig 3. Migrating/moving a live virtual machine after the setting has been enabled in Hyper-V Settings in Hyper-V Manager. The screenshot also illustrates the “Enable Replication” option, which has to be manually activated on each virtual machine you want to replicate.

 

 

clip_image012

Fig 4. Simulating an (unplanned) failover if the primary server breaks

 

I have written more about replication later on in the document (sub-chapter Setting up Replicas).

 

Sources:

http://www.virtualizationadmin.com/articles-tutorials/microsoft-hyper-v-articles/networking/working-replicas-hyper-v-30-part1.html

 

http://blogs.msdn.com/b/mvpawardprogram/archive/2012/11/05/windows-server-2012-hyper-v-high-availability-without-a-san.aspx

 

http://www.youtube.com/watch?v=BDbPcGGTYmw&list=PLB0C0DCC004458603&index=3

http://www.aidanfinn.com/?p=12147

http://www.altaro.com/hyper-v/live-migration-in-hyper-v-explained-part-1/

http://blogs.technet.com/b/yungchou/archive/2013/01/10/hyper-v-replica-explained.aspx

 

 

 

Installing Hyper-V on server 1

 

After all the testing and the theoretical parts comes the fun part – installation on physical hardware 🙂 Fortunately, Windows Server 2012 detects the drivers for the server’s SAS/SCSI card (HP Smart Array P400) automatically, so I could proceed with a normal installation.

 

I wasn’t in the mood for the Server Core version, so full version it is. The default layout looks like crap in my opinion (metro), so I started off by enabling the Desktop Experience feature from Server Manager. After that I installed Classic Shell. Aaah, now it’s usable 🙂 After this I enabled Remote Desktop so I could do the rest remotely.

 

Then I applied local policies from Microsoft Security Compliance Manager (SCM) 3.0 for maximum security. I’m using the Windows Server 2012 baseline for Hyper-V, applying the exported policies with the LocalGPO tool. This step isn’t strictly necessary, as we already have a good firewall (at the Computing Centre) and the server isn’t visible on the external network, but it doesn’t hurt with some extra protection…

 

Network setup

 

Virtual Switches:

Network1: Management/Remote Access/Replication (internal).

Network2: External Access (University Network)

 

I also unselected “Allow management operating system to share this network adapter” on the external adapter (based on http://www.techrepublic.com/blog/data-center/set-up-your-first-windows-server-2012-hyper-v-host/ ). A PowerShell sketch of the switch setup follows below.
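A minimal sketch; the adapter names are placeholders for our two NICs:

# Internal-facing switch, shared with the management OS (management/remote access/replication)
New-VMSwitch -Name "Network1" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
# External-facing switch, not shared with the management OS
New-VMSwitch -Name "Network2" -NetAdapterName "Ethernet 2" -AllowManagementOS $false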

 

Remote Access

 

I don’t want to use Remote Desktop to manage the virtual machines on the Hyper-V host. Instead I prefer doing it from my workstation with Hyper-V Manager. Some tweaks (actually A LOT) have to be made and here’s an excellent guide:

http://blogs.technet.com/b/jhoward/archive/2008/03/28/part-1-hyper-v-remote-management-you-do-not-have-the-requested-permission-to-complete-this-task-contact-the-administrator-of-the-authorization-policy-for-the-computer-computername.aspx

I did the Remote Access tweaks manually, but I could have used a script which would have been much easier. The script is available from:

http://code.msdn.microsoft.com/windowsdesktop/Hyper-V-Remote-Management-26d127c6

This scenario is the same as using VMware Infrastructure Client against VMware ESXi: everything is managed from your own workstation. With this done, it’s time to prepare the other server for Hyper-V and to create a new virtual machine, the new file server. More on that in the sub-chapter New virtual machine(s).

 

Tweaking

 

I tried to read as many documents/articles as possible about maximizing performance on the Hyper-V hosts. In the end, I didn’t change much from the defaults. I did however change the virtual machines to use dynamic memory (sketched below).
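Dynamic memory can be flipped on per VM like this. The VM name and the memory values below are made-up examples, not my actual sizing:

# Enable dynamic memory on a powered-off VM
Set-VMMemory -VMName "fileserver" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB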

Sources:

http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx

 

 

 

Preparing server 2 for Hyper-V / moving VMs to server 1

 

Server 2 is running three virtual machines at the moment. One of these (the file server) will be upgraded and its data migrated; I will write about this later on. The other two VMs (a Linux web server and a Windows terminal server) will be converted/moved over to Hyper-V without changes. I’m going to replace the terminal server with a brand new Windows Server 2012 later on, but that’s another document/story.

Anyway, back to the conversion/preparation. Here are my steps:

 

· Installed System Center 2012 Virtual Machine Manager with Service Pack 1 on my workstation so I could try its fancy conversion tools. I then followed this guide to be able to connect to my Hyper-V host:

http://technet.microsoft.com/en-us/library/gg610642.aspx

I would have managed just fine with only Hyper-V Manager, but decided to try SCVMM since it’s available to us for free via MSDNAA.

· Installed the System Center Virtual Machine Manager Agent on the Hyper-V host

· Too much work – not worth it

· Tried 5nine EasyConverter instead. What a nice piece of software 🙂 Just select your desired VMware VMs straight from the program and then select the destination Hyper-V server. Can’t get much easier than this, or so I thought…

· No go. Error in the conversion process; it didn’t even start. My guess is that it doesn’t work that well with old Linux distros (it supports Linux, though). I will give it another try with Windows Server 2003.

· Downloaded StarWind V2V Converter instead from

http://www.starwindsoftware.com/converter. Finally success with conversion.

· Copied the converted VHD over to server 1, created a new virtual machine and used the VHD as its hard disk. Powered it on and it worked, sort of. Did some research on the mighty Google, and it turned out you have to add a Legacy Network Adapter. Added that and re-configured the network from within CentOS. Success!

· Back to 5nine EasyConverter and had a go with the old Windows Server 2003 Terminal Server.

· Nope, no go. I didn’t want to spend my energy on error searching/log reading this time, so StarWind V2V Converter it is again. I forgot to uninstall VMware Tools before the conversion, but it seems to work anyway. Uninstalled them afterwards with the help of this article:

http://social.technet.microsoft.com/Forums/windowsserver/en-US/6a441588-24fd-4f39-9cbc-5d028fec7c41/hyper-v-and-vmtools-setup-failed-to-detemine-which-vm-product

· Installed Hyper-V Integration Services and everything worked as normal. Success!

· Now it’s time to work on the file server (new virtual machine(s), next chapter)

 

 

New virtual machine(s)

 

After the preparations above I installed the soon-to-be new file server. Nothing special: just one virtual hard drive for the OS and another one for the files/data. I decided to try a dynamically expanding disk for the data to save precious disk space. I know this could slow things down, but time will tell. I also applied the local policy from the Windows Server 2012 baseline for file servers and member servers. I installed the roles shown in Fig 5.

 

clip_image014

Fig 5. File Server server role

 

We only have one file server, so DFS and namespaces weren’t necessary. I also configured Data Deduplication immediately, as I like this new feature in Windows Server 2012 (a PowerShell sketch of enabling it follows the sources below).

 

“Data deduplication involves finding and removing duplication within data without compromising its fidelity or integrity. The goal is to store more data in less space by segmenting files into small variable-sized chunks (32–128 KB), identifying duplicate chunks, and maintaining a single copy of each chunk. Redundant copies of the chunk are replaced by a reference to the single copy. The chunks are compressed and then organized into special container files in the System Volume Information folder.”

Sources:

http://technet.microsoft.com/en-us/library/hh831602.aspx

http://technet.microsoft.com/en-us/library/hh831700.aspx

http://blogs.technet.com/b/uspartner_ts2team/archive/2012/10/08/data-deduplication-in-windows-server-2012.aspx
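Enabling deduplication on the data volume boils down to a few commands. A minimal sketch, assuming the data volume is D: as in my setup:

# Install the deduplication feature and enable it on the data volume
Install-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "D:"
# Kick off an optimization job right away instead of waiting for the schedule
Start-DedupJob -Volume "D:" -Type Optimization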

 

Now it was time for data migration from the old file server to the new one. I used Robocopy for this task. My steps:

 

· Had some help from:

http://www.edugeek.net/forums/how-do-you-do/90602-robocopy-help.html

but finally ran with my own switches (from the destination server):

Robocopy.exe \\source_server\dir D:\dir /S /E /Z /R:1 /W:1 /COPYALL /TEE /LOG:d:\dir\log.txt

· Did the job just right. I tried with the /MIR switch afterwards, which also did the job (it checks for files changed since the previous copy, or “mirrors” a share; see the sketch after this list).
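The follow-up sync run looked roughly like this; same switches as above, with /MIR replacing /S /E (note that /MIR also deletes files from the destination that no longer exist on the source):

Robocopy.exe \\source_server\dir D:\dir /MIR /Z /R:1 /W:1 /COPYALL /TEE /LOG:d:\dir\log2.txt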

 

After migration I enabled Access Based Enumeration on the shares. Info:

http://heineborn.com/tech/enable-access-based-enumeration-in-windows-server-2012/
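ABE can also be enabled per share from PowerShell; a sketch with a placeholder share name:

# Enable Access Based Enumeration on an SMB share
Set-SmbShare -Name "Data" -FolderEnumerationMode AccessBased
# Verify
Get-SmbShare -Name "Data" | Format-Table Name, FolderEnumerationMode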

 

I also enabled Shadow Copies of the shared folders so I could take advantage of previous versions of files.

 

“Shadow Copies of Shared Folders provides point-in-time copies of files that are located on shared resources, such as a file server. With Shadow Copies of Shared Folders, users can view shared files and folders as they existed at points of time in the past. Accessing previous versions of files, or shadow copies, is useful because users can:

 

· Recover files that were accidentally deleted. If you accidentally delete a file, you can open a previous version and copy it to a safe location.

· Recover from accidentally overwriting a file. If you accidentally overwrite a file, you can recover a previous version of the file. (The number of versions depends on how many snapshots you have created.)

· Compare versions of a file while working. You can use previous versions when you want to check what has changed between versions of a file.”

 

Sources:

http://technet.microsoft.com/en-us/library/cc771305.aspx

http://technet.microsoft.com/en-us/library/cc771893.aspx

 

Now that deduplication was enabled, I had a look at the “statistics”. It was indeed doing its job; here’s a screenshot of the space savings (45%, or 69.1GB):

 

clip_image016

 

Fig 6. Deduplication
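The same numbers can also be checked from PowerShell, if you prefer that over the GUI:

# Show deduplication status and savings per volume
Get-DedupStatus
Get-DedupVolume | Format-Table Volume, SavedSpace, SavingsRate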

 

 

 

 

Installing Hyper-V on server 2

 

I have now successfully migrated all of the virtual machines from VMware to Hyper-V. They are all running on server 1, so it’s time to install Hyper-V on server 2. The steps are just about the same as on server 1, so I won’t repeat them here. The steps for Remote Access are a lot easier when you have already done the client part, however…

 

 

 

Setting up replicas

 

With both servers running Hyper-V, it was now time to think about replica so I could have a disaster plan. I enabled replica on BOTH hosts (Fig 7), as described earlier in the chapter Planning for Hyper-V. Just enabling replication wasn’t enough, because my servers are in a workgroup environment; I did some further configuration with certificates.

 

clip_image018

Fig 7. Enabling Replication

 

Here’s an excellent guide I followed for certificate setup:

“Building Free Hyper-V 3 Replica Step by Step Guide in Workgroup Mode”:

http://jsmcomputers.biz/wp/?p=360

The guide seems to be based on technet’s article “Prepare to Deploy Hyper-V Replica”:

http://technet.microsoft.com/en-us/library/jj134153.aspx

 

I didn’t add any DNS suffixes though; instead I used host names in c:\windows\system32\drivers\etc\hosts

 

Do remember to enable the replication on both Hyper-V hosts so the replication direction can be reversed.

Source: http://technet.microsoft.com/en-us/library/jj134240.aspx#BKMK_2_4

 

With the certificates done, I could finally start replicating. You can choose between three different initial replication modes:

 

· Send initial copy over the network

· Send initial copy using external media

· Use an existing virtual machine on the Replica server as the initial copy.

 

I chose to send the initial copy using external media instead of using up network bandwidth (and time). Just right-click the virtual machine you wish to replicate and choose “Enable Replication”. After that, a wizard pops up with the different initial replication modes. When the initial replication is done (to the USB drive, in my case), you just eject the drive and move it over to the other Hyper-V host/replication partner. On that host you right-click the same virtual machine and choose Replication –> Import Initial Replica (Fig 8). From here on, replication happens over the network every 5 minutes (not configurable). I did the same thing with all three of my virtual machines. (A PowerShell sketch of the same procedure follows Fig 8.)

 

clip_image020

Fig 8. Import initial replica
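The whole procedure can also be scripted with the Hyper-V replication cmdlets. A sketch; the VM name, host name, thumbprint and paths are placeholders, not my actual values:

# On the primary host: enable certificate-based replication for a VM
Enable-VMReplication -VMName "fileserver" -ReplicaServerName "hyper2" -ReplicaServerPort 443 -AuthenticationType Certificate -CertificateThumbprint "<thumbprint>"
# Write the initial copy to external media (the USB drive)
Start-VMInitialReplication -VMName "fileserver" -DestinationPath "E:\InitialReplica"
# On the replica host: import the initial copy from the same drive
Import-VMInitialReplication -VMName "fileserver" -Path "E:\InitialReplica"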

 

“From this point onwards the VM is protected and will allow operations like Failover and Test Failover.”

Source: http://blogs.technet.com/b/virtualization/archive/2013/06/28/save-network-bandwidth-by-using-out-of-band-initial-replication-method-in-hyper-v-replica.aspx

 

I noticed that my initial replication was stated as Replication Health: Warning.

It turned out that this was nothing to worry about; the health goes back to normal once the initial replication has finished.

“The Replication Health is shown as Warning when the replication is ‘not optimal’. The conditions which would result in a Warning health include:

· 20% of replication cycles have been missed in a monitoring interval – Common reasons which lead to this condition include insufficient network bandwidth, storage IOPS bottleneck on your replica server.

· More than an hour has elapsed since the last send replica (on the primary VM) was sent or the last received replica (on the replica VM) was received – This could result in a loss of more than 60mins worth of data loss if the replica VM is failed over (due to a disaster)

· If Initial Replication has not been completed

· If Failover has been initiated, but ‘reverse replication’ has not been initiated

· If the primary VM’s replication is paused.”

 

Source: http://blogs.technet.com/b/virtualization/archive/2012/06/15/interpreting-replication-health-part-1.aspx

 

Now I did a planned failover (initiated on the primary server) from server 1 to server 2, as server 2 was going to be the new “primary home” for the virtual machines (Fig 9). This should NOT be confused with plain “failover” (initiated on the replica server), which is only used in emergency situations (Fig 10).

 

clip_image022

Fig 9. Planned Failover

 

 

clip_image024

Fig 10. Failover

 

The reason for my failover (or “server switching”) is that server 2 is faster than server 1 (SAS HDDs). Here are my (easy) steps (a PowerShell sketch follows the list):

· Turn off the virtual machine(s) that will be the “victim(s)” of the planned failover (they can’t be powered on, see Fig 11)

 

             clip_image026

Fig 11. Bummer!

 

· Initiate the planned failover

o   Will actually replicate quite fast (only changes)

o   Short downtime

· Primary server changes from server 1 to server 2

· Same thing on all three virtual machines (or just the ones you prefer)

· Reconfigure vm networking on the new host if needed

· Awesomeness and success 🙂
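For reference, here’s a rough PowerShell equivalent of the planned failover steps above (the VM name is a placeholder):

# On the old primary (server 1): shut down the VM and prepare the failover
Stop-VM -Name "fileserver"
Start-VMFailover -VMName "fileserver" -Prepare
# On the replica (server 2): fail over, reverse the replication direction and start the VM
Start-VMFailover -VMName "fileserver"
Set-VMReplication -VMName "fileserver" -Reverse
Start-VM -Name "fileserver"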

 

 

Here are some more screenshots from failover and replication:

 

clip_image028

Fig 12. Waiting for virtual machine to fail over.

 

 

clip_image030

Fig 13. Health checking on one of the virtual machines. Everything is ok!

 

 

 

That’s it; VMware is now replaced by Hyper-V! I know a lot more now than I did before I started this little project. Best of all, everything is working just the way it was intended 🙂

At the moment I have two of the virtual machines running on server 2 and one on server 1, just to even out the load a bit.

 

 

Stay tuned for more posts! 

 

 

 

Sources

 

Mentioned in the text