Using FarmGuardian to enable HA on Back-ends in Zen Load Balancer

UPDATE 13.8.2018: I published a new blog post about switching from Zen Load Balancer to HAProxy. The time has come to retire Zen for us…

We've been using the Zen Load Balancer Community Edition in production for almost a year now, and it has been working great. I previously wrote a blog post about installing and configuring Zen, and now it was time to look at the HA aspect of the back-end servers defined in various Zen farms. Zen itself is quite easy to set up in HA mode: you just configure two separate Zen servers in HA mode according to Zen's own documentation. This is very nice and all, and it's also working as it should. The thing that confused me the most (until now), however, is the HA aspect of the back-ends. I somehow thought that if you specify two back-ends in Zen and one of them fails, Zen automatically uses the back-end which is still working and marked as green (status dot). Well, this isn't the case. I don't know if I should blame myself or the poor documentation, or both. Anyway, an example is probably better. Here's an example of L4xNAT farms for Exchange (with two back-ends):

zen_farms_table_overview_2017

I guess it's quite self-explanatory; we're load balancing the "normal" port 443 plus IMAP and SMTP. (Not all the SMTP ports are open to the Internet though, just towards our 3rd party SMTP server). The HTTP farm is used for HTTP-to-HTTPS redirection for OWA.

Furthermore, expanding the Exchange-OWAandAutodiscover-farm:

zen_owa_and_autodiscover_farm2017

 

and the monitoring part of the same farm:

zen_owa_and_autodiscover_farm_monitoring2017

 

This clearly shows that the load balancing part of Zen is working: the load is evenly distributed. You can also see that the status is green on both back-ends. Fine. Now one would THINK that the status turns RED if a back-end is down, and that all traffic would flow through the other server if this happens. Nope. Not happening. I was living in this illusion though 😦 As I said before, this is probably a combination of my own lack of knowledge and poor documentation. Also, as far as I know there are no clear "rules" for which farm type you should use when building farms. Zen's own documentation seems to favor l4xnat for almost "everything". However, if you're using HTTP farms, you get HA on the back-ends out-of-the-box. (You can specify back-end response timeouts and checks for resurrected back-ends, for example). Then again, you'll also have to use SSL offloading with the HTTP farm, which is a whole different chapter/challenge when used with Exchange. If you're using l4xnat you will NOT have HA enabled on the back-ends out-of-the-box, and you'll have to use FarmGuardian instead. Yet another not-so-well-documented feature of Zen.

FarmGuardian “documentation” is available at https://www.zenloadbalancer.com/farmguardian-quick-start/. Have a look for yourself and tell me if it’s obvious how to use FarmGuardian after reading.

Luckily I found a few hits on Google (not that many) from people trying to achieve something similar:

https://sourceforge.net/p/zenloadbalancer/mailman/message/29228868/
https://sourceforge.net/p/zenloadbalancer/mailman/message/32339595/
https://sourceforge.net/p/zenloadbalancer/mailman/message/27781778/
https://sourceforge.net/p/zenloadbalancer/mailman/zenloadbalancer-support/thread/BLU164-W39A7180399A764E10E6183C7280@phx.gbl/

These gave me some ideas. Well, I’ll spare you the pain of googling and instead I’ll present our (working) solution:

zen_owa_and_autodiscover_farm_with_farmguardian_enabled2017

First off, you'll NEED a working script or command for the check part. Our solution is a script that checks that every virtual directory is up and running on each Exchange back-end. If NOT, the "broken" back-end will be put in down mode and all traffic will instead flow through the other (working) one. I chose 60 sec for the check interval, as Outlook times out after one minute by default (if a connection to the Exchange server can't be established). Here's the script, which is based on a script found at https://gist.github.com/phunehehe/5564090:

zen_farmguardian_script2017

Big thanks to the original script writer and to my workmate who helped me modify the script. Sorry, only available in "screenshot form".
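Since the original is only a screenshot, here's a rough sketch of what such a check script can look like. To be clear: this is NOT the exact script from the screenshot. The healthcheck URLs are assumptions (adjust them to your Exchange version), and the sketch assumes the back-end IP arrives as the first argument (as in the manual test below) and that FarmGuardian treats a non-zero exit code as a failed back-end:

#!/bin/bash
# Sketch of a multi-URL check script for FarmGuardian (hypothetical).
# $1 = back-end IP, passed by FarmGuardian / the manual test.
HOST="$1"

# Virtual directories to probe; adjust to your own Exchange setup.
URLS="/owa/healthcheck.htm /ecp/healthcheck.htm /ews/healthcheck.htm /autodiscover/healthcheck.htm"

for url in $URLS; do
    # -k: the cert won't match a raw IP; -m 10: give up after 10 seconds
    status=$(curl -k -s -o /dev/null -m 10 -w '%{http_code}' "https://${HOST}${url}")
    if [ "$status" != "200" ]; then
        echo "FAIL: ${HOST}${url} returned ${status}"
        exit 1    # non-zero exit = back-end gets marked as down
    fi
done

echo "OK: all virtual directories on ${HOST} answered 200"
exit 0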

You can manually test the script by running ./check_multi_utl.sh "yourexchangeserverIP" from a Zen terminal:

zen_farmguardian_script_manual_testing_from_terminal2017

The (default) scripts in Zen are located in /usr/local/zenloadbalancer/app/libexec btw. This is a good place to stash your own scripts also.
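If you go that route, stashing and activating your own script looks something like this (a sketch; the filename matches the manual test above):

cp check_multi_utl.sh /usr/local/zenloadbalancer/app/libexec/
chmod +x /usr/local/zenloadbalancer/app/libexec/check_multi_utl.sh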

 

You can find the logs in /usr/local/zenloadbalancer/logs. Here’s a screenshot from our log (with everything working):

zen_farmguardian_log2017

 

And lastly I’ll present a couple of screenshots illustrating how it looks when something is NOT OK:

(These screenshots are from my own virtual test environment, I don’t like taking down production servers just for fun 🙂 )

zen_owa_and_autodiscover_farm_monitoring_host_down2017

FarmGuardian will react and present a red status-symbol. In this test, I took down the owa virtual directory on ex2. When the problem is fixed, status will return to normal (green dot).

 

and in the log:

zen_farmguardian_log_when_failing2017

The log will tell you that the host is down.

 

Oh, as a bonus for those of you wondering how to do an HTTP to HTTPS redirect in Zen:

zen_http_to_https_redirect2017

Create a new HTTP farm and leave everything at its defaults. Add a new service (name it whatever you want) and then just add the rules for redirection. Yes, it's actually this simple. At least after you find the documentation 🙂

And there you have it. Both the Zen servers AND the back-ends working in HA-mode. Yay 🙂

ownCloud 9 on Raspberry Pi 2 with mounted Buffalo NAS

The Linux nerd inside me was screaming for a new RPi project. What to build? What to do? You can't read any modern IT literature nowadays without stumbling upon the word "cloud". Well, cloud it is. ownCloud in my case. I guess almost everyone is familiar with this open source cloud software, but for those of you who aren't, you can find information at:

https://owncloud.org/features/

The idea was to access my trusty old Buffalo NAS remotely, without the need to map network drives etc. Buffalo actually offers some sort of cloud solution too, but hey, it's much more fun configuring your own stuff 🙂 The idea is quite simple: the RPi is a front-end for the NAS. Clients connect to the RPi, which in turn mounts network storage from the NAS. Here's the setup:

owncloud_buffalo_rpi

Fig 1. RPi + Buffalo NAS

 

Initial questions and ideas

  • Should Raspberry Pi / ownCloud be visible on the Internet? If so, how to secure it properly?
      • Port forwarding with restrictions / reverse proxy?
  • If not visible on the Internet, how should one connect from the outside world?
      • VPN?

I decided to go with the second option, not visible on the Internet. My decision is based on the fact that I'm already running a VPN server. It's one extra step before getting/synchronizing the files, but I think it's worth it in the end. Besides, all my other services are behind VPN too.

That said, I STILL configured ownCloud to be “future-proof” even if the server won’t be Internet-facing (with port forwarding etc.) right now. (See the securing ownCloud chapter). Better safe than sorry 🙂

 

Installation

As with almost every project, I usually follow an existing guide. ownCloud is quite a mainstream product, so there are tons and tons of documentation available. The guide that I used as my baseline this time was: http://www.htpcguides.com/install-owncloud-8-x-raspberry-pi-for-personal-dropbox/ . Thanks to the author 🙂 Steps:

  • Followed the guide down to “Now make your ownCloud directory adjust your path as necessary to your mounted hard drive folder..”. As I’ll be using a NAS, it was time for another guide:
    • http://sharadchhetri.com/2013/10/23/how-to-mount-nas-storage-in-owncloud-to-keep-all-users-data/
    • created a share on the NAS (named owncloud). Gave the share read/write access for a user also named “owncloud”.
    • mounted the share on the RPi. The guide uses backslash, but it should be forward slash:
      • e.g. mount -t cifs //192.168.10.20/owncloud /mnt -o username=owncloud,password=owncloud
    • Did not follow step 3 completely because there was no data-directory created during the installation (yet). The installer GUI will look for a data directory though, so this is the time to create and mount it properly.
    • Got the uid of www-data or apache user by using id command:
      • root@owncloud:~# id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),1000(pi)
      • OK. ID is 33
    • Created a local data-directory which will mount the owncloud share (from the NAS).
    • mkdir -p /var/www/owncloud/data
      • changed ownership and permission on the data directory;
        • chmod -R 770 /var/www/owncloud/data ; chown -R www-data:www-data /var/www/owncloud/data
      • Added the following line to /etc/fstab (bottom of file) to make the data directory available in owncloud (setup):
        • //192.168.10.20/owncloud /var/www/owncloud/data cifs user,uid=33,rw,suid,username=owncloud,password=owncloud,file_mode=0770,dir_mode=0770,noperm 0 0
    • Ran mount -a and checked that the NAS got properly mounted (see the quick check sketch after this list). For me it did. In other words, the "local" /var/www/owncloud/data was now actually living on the NAS.
    • Finished the configuration via ownCloud's own GUI setup. Everything went fine…
    • …however, after a reboot the share was NOT auto mounted 😦
    • I got an error when trying to access owncloud over the web interface: Please check that the data directory contains a file “.ocdata” in its root
      • Scratched my head and wondered what the hell went wrong. I was quite sure it had to do with permissions. Turned out I was right. Short version:
      • http://htyp.org/Please_check_that_the_data_directory_contains_a_file_%22.ocdata%22_in_its_root
        • created an empty .ocdata file (after I had manually mounted the /var/www/owncloud/data directory from the NAS).
        • chmodded that file and the data directory with "new rights":
          • chmod 777 .ocdata ; chmod 777 /var/www/owncloud/data
          • success, the NAS now got automounted after an RPi reboot 🙂
    • Everything worked, so moving over to the (optional) security part.
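Before moving on: the quick checks referenced above, to confirm the CIFS share really is mounted where ownCloud expects it (a sketch; paths as configured earlier):

mount | grep owncloud            # the CIFS share should show up on /var/www/owncloud/data
ls -la /var/www/owncloud/data    # should list .ocdata and the ownCloud data directories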

 

Securing ownCloud

ownCloud's own Security & setup warnings will warn you about things to fix. Here's a screenshot with almost no security measures taken. (The actual screenshot is not mine, it's "borrowed" from the Internet):

owncloud_setup_security_warnings

Fig 2. Before fixing Security & Setup warnings. I’ll write about memory cache in the next chapter (Optimizing ownCloud).

 

… and here’s a screenshot with fixed security (also memcached fixed):

owncloud_setup_security_no_warnings

Fig 3. No security & setup warnings 🙂

 

Basic security

The initial setup guide I followed had (luckily) already taken some basic security measures:

  • Redirected all unencrypted traffic to HTTPS (in /etc/nginx/sites-available/owncloud):
      server {
        listen 80;
        server_name htpcguides.crabdance.com 192.168.40.135;
        return 301 https://$server_name$request_uri;  # enforce https
      }
  • Used SSL certificates for https (self-signed):
      ssl_certificate /etc/nginx/ssl/owncloud.crt;
      ssl_certificate_key /etc/nginx/ssl/owncloud.key;
  • Created a virtual host for owncloud, not using “default”.
  • Protected the data directory and files from the internet (outside world):
       # Protecting sensitive files from the evil outside world
        location ~ ^/owncloud/(data|config|\.ht|db_structure.xml|README) {
                 deny all;
        }
  • After this, there were still some things to take care of. Although not visible in Fig 2 above, I also got a warning saying that HTTP Strict Transport Security wasn't used. Well, a quick googling fixed this. All that was needed was one extra header directive in the same configuration file as above (/etc/nginx/sites-available/owncloud).
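The exact line didn't survive into this post, but the directive in question is nginx's add_header for HSTS, placed inside the HTTPS server block. Roughly like this (a sketch; pick a max-age you can live with):

      add_header Strict-Transport-Security "max-age=15768000; includeSubDomains";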

More information about security can be found in ownCloud's own Hardening and Security Guidance:

https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/harden_server.html

 

Advanced security

If you are going to deploy a server that's facing the Internet, you have to think about security. The basic security measures are a must, but what if you want to secure it even more? You certainly want your site protected against DDoS and brute-force attacks, don't you? Well, here's where one of my favorites comes into play: fail2ban. If you have no idea what I'm talking about, I suggest that you read at least the following:

http://www.fail2ban.org/wiki/index.php/Main_Page
https://www.techandme.se/fail2ban-owncloud/ (<- Specifically THIS link)
https://snippets.aktagon.com/snippets/554-how-to-secure-an-nginx-server-with-fail2ban
https://easyengine.io/tutorials/nginx/fail2ban/

If you're lazy, just follow the above guides and set up accordingly. However, use common sense and double-check that everything seems to be in order before going to production. I myself created the following jail files and configuration files, and restarted fail2ban. Everything was working as expected 🙂 My configuration files:

root@owncloud:/etc/fail2ban/filter.d# ls -la ngin*.*
-rw-r--r-- 1 root root 345 May 23 09:28 nginx-auth.conf
-rw-r--r-- 1 root root 422 Mar 15  2014 nginx-http-auth.conf
-rw-r--r-- 1 root root 280 May 23 09:29 nginx-login.conf
-rw-r--r-- 1 root root 300 May 23 09:28 nginx-noscript.conf
-rw-r--r-- 1 root root 230 May 23 09:28 nginx-proxy.conf
-rw-r--r-- 1 root root 282 May 24 09:53 nginx-req-limit.conf

and

root@owncloud:/etc/fail2ban/filter.d# ls -la own*
-rw-r--r-- 1 root root 146 May 23 09:08 owncloud.conf

(Contents of these files can be found in the links above)
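To give a flavor of what these filters contain, here's a rough sketch of the owncloud filter. The exact failregex depends on your ownCloud version's log format, so verify it against a real failed-login line in owncloud.log (the linked guide has version-specific variants):

# /etc/fail2ban/filter.d/owncloud.conf (sketch)
[Definition]
failregex = Login failed.*Remote IP.*<HOST>
ignoreregex =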

…and jail-files:

root@owncloud:/etc/fail2ban# cat jail.local

[owncloud]
enabled = true
filter  = owncloud
port    = https
bantime  = 3000
findtime = 600
maxretry = 3
logpath = /var/www/owncloud/data/owncloud.log

[nginx-auth]
enabled = true
filter = nginx-auth
action = iptables-multiport[name=NoAuthFailures, port="http,https"]
logpath = /var/log/nginx*/*error*.log
bantime = 600 # 10 minutes
maxretry = 6

[nginx-login]
enabled = true
filter = nginx-login
action = iptables-multiport[name=NoLoginFailures, port="http,https"]
logpath = /var/log/nginx*/*access*.log
bantime = 600 # 10 minutes
maxretry = 6

[nginx-badbots]
enabled  = true
filter = apache-badbots
action = iptables-multiport[name=BadBots, port="http,https"]
logpath = /var/log/nginx*/*access*.log
bantime = 86400 # 1 day
maxretry = 1

[nginx-noscript]
enabled = false
action = iptables-multiport[name=NoScript, port="http,https"]
filter = nginx-noscript
logpath = /var/log/nginx*/*access*.log
maxretry = 6
bantime  = 86400 # 1 day

[nginx-proxy]
enabled = true
action = iptables-multiport[name=NoProxy, port="http,https"]
filter = nginx-proxy
logpath = /var/log/nginx*/*access*.log
maxretry = 0
bantime  = 86400 # 1 day

[nginx-req-limit]
enabled = true
filter = nginx-req-limit
action = iptables-multiport[name=ReqLimit, port="http,https", protocol=tcp]
logpath = /var/log/nginx/*error*.log
findtime = 600
bantime = 7200
maxretry = 10

 

After you’ve created the jails and the configuration files you should restart the fail2ban service, “sudo service fail2ban restart”. You can then have a look in the log file, /var/log/fail2ban.log to see if everything looks ok. For me it did:

2016-05-24 09:55:53,094 fail2ban.jail   [4778]: INFO    Jail 'ssh' started
2016-05-24 09:55:53,136 fail2ban.jail   [4778]: INFO    Jail 'owncloud' started
2016-05-24 09:55:53,162 fail2ban.jail   [4778]: INFO    Jail 'nginx-auth' started
2016-05-24 09:55:53,190 fail2ban.jail   [4778]: INFO    Jail 'nginx-login' started
2016-05-24 09:55:53,223 fail2ban.jail   [4778]: INFO    Jail 'nginx-badbots' started
2016-05-23 10:28:13,243 fail2ban.jail   [1350]: INFO    Jail 'nginx-noscript' started
2016-05-24 09:55:53,249 fail2ban.jail   [4778]: INFO    Jail 'nginx-proxy' started
2016-05-24 09:55:53,281 fail2ban.jail   [4778]: INFO    Jail 'nginx-req-limit' started

All this configuration is a bit overkill for me, as I'm not going to expose the ownCloud server to the Internet. Instead I'm using VPN + ownCloud. This is however a great opportunity to learn about ownCloud and its security, so it would be a shame NOT to configure it as securely as possible 🙂 (The ssh jail is a very nice bonus if you're also forwarding that port towards the Internet).

 

Optimizing ownCloud

After all the security hardening stuff it was time to look at optimization. The initial guide includes some optimization and installs all the PHP modules needed for memory caching. I'm quite sure I could optimize ownCloud much more, but there's no need to overdo it in such a small home environment. In other words, memory caching is enough in my case. More info about memory caching can be found in ownCloud's own documentation: https://doc.owncloud.org/server/8.1/admin_manual/configuration_server/caching_configuration.html

Even though this topic is already covered at the bottom in the initial guide (http://www.htpcguides.com/install-owncloud-8-x-raspberry-pi-for-personal-dropbox/), I’ll write a summary here:

  • Edit /var/www/owncloud/config/config.php
  • At the bottom of the file, add:
    'memcache.local' => '\OC\Memcache\Memcached',
    'memcache.distributed' => '\OC\Memcache\Memcached',
    'memcached_servers' => array(
        array('127.0.0.1', 11211),
    ),
  • Check that memcached is running:

      root@owncloud:# netstat -nap | grep memcached
      tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      457/memcached
      udp        0      0 127.0.0.1:11211         0.0.0.0:*                           457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7961     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7955     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7753     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7967     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7960     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7954     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7966     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7964     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7958     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7963     457/memcached
      unix  3      [ ]         STREAM     CONNECTED     7957     457/memcached

      Source: http://stackoverflow.com/questions/1690882/how-do-i-see-if-memcached-is-already-running-on-my-chosen-port

  • Done. You should now have a pretty safe (and optimized) environment to play with 🙂
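As an extra sanity check you can also talk to memcached directly (a quick sketch; assumes netcat is installed):

echo stats | nc -q 1 127.0.0.1 11211 | head -5

If it prints a few STAT lines, memcached is alive and answering on the configured port.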

 

ownCloud works pretty much the same way as Dropbox and others. Here’s a screenshot from a Mac client and one test-file synchronized:

owncloud_screenshot_mac_client

Fig 4. ownCloud + Mac.

 

Useful configuration paths

I’ll finish this blog post with a short summary of useful configuration paths. Here you go:

/var/www/owncloud/config/config.php (https://doc.owncloud.org/server/8.1/admin_manual/configuration_server/config_sample_php_parameters.html)
/var/www/owncloud/data
/var/www/owncloud/data/owncloud.log
/etc/nginx/sites-available/owncloud
/etc/fstab

/etc/fail2ban/jail.local
/etc/fail2ban/filter.d/*.conf
/var/log/fail2ban.log

Running an OpenVPN Server on the Raspberry Pi

Time for a nice little Raspberry Pi project again, this time an OpenVPN server! 🙂 My router at home is a bit oldish and can't handle custom firmware like DD-WRT or OpenWrt. It most certainly can't handle VPN connections either. With these facts in mind, I thought I'd build my own VPN server with a Raspberry Pi. It was a little more complex than I thought, but I actually got it up 'n running in a few days. It's better to be safe than sorry, so please do some reading before you build. Here are a couple of good starting points which helped me in the right direction:

http://readwrite.com/2014/04/10/raspberry-pi-vpn-tutorial-server-secure-web-browsing (Mine is a modified version of this guide using server-bridge from the next link)
http://www.emaculation.com/doku.php/bridged_openvpn_server_setup
http://readwrite.com/2014/04/11/building-a-raspberry-pi-vpn-part-two-creating-an-encrypted-client-side#awesm=~oB89WBfWrt21bV
https://community.openvpn.net/openvpn/wiki/323-i-want-to-set-up-an-ethernet-bridge-on-the-1921681024-subnet-existing-dhcp
https://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
https://openvpn.net/index.php/open-source/documentation/howto.html#config

My goals: build a VPN server using an existing DHCP server (router) on my internal LAN (this is done by bridging, btw), and use port forwarding on the router so the VPN server is exposed to the Internet/WAN. Another goal was to keep the server rather secure. I built the server on top of Raspbian, which I'll assume you can install by now. So here goes: the steps for building an OpenVPN server on a Raspberry Pi. Note: this is a VERY short guide; there are URLs to longer explanations in my text. (I don't feel like re-writing existing information).

  • Update the Pi to the newest version:
    • sudo apt-get update && sudo apt-get upgrade -y (for software)
    • sudo rpi-update (for kernel and firmware)
  • Install OpenVPN and bridge utils: sudo apt-get install openvpn bridge-utils -y
  • Become root: sudo -s
  • Generate keys:
  • Copy openvpn examples from /usr/share/doc/openvpn/examples/easy-rsa/2.0 to /etc/openvpn/easy-rsa
    • cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0 /etc/openvpn/easy-rsa
    • edit the /etc/openvpn/easy-rsa/vars file
      • find and change the EASY_RSA variable to export EASY_RSA="/etc/openvpn/easy-rsa"
  • Build CA Certificate and Root CA Certificate:
    • change dir to /etc/openvpn/easy-rsa
    • source the ./vars file: source ./vars
    • remove previous keys if necessary; ./clean-all
    • build your CA: ./build-ca
    • Create server credentials: ./build-key-server server
  • Create Diffie-Hellman key exchange: ./build-dh
  • Use OpenVPN's built-in DoS attack protection (generate a static HMAC key): openvpn --genkey --secret keys/ta.key
  • That’s it for the server, now we will create keys for the clients:
    • ./build-key jocke (leave challenge password blank and sign the certificate)
    • go to the keys directory: cd /etc/openvpn/easy-rsa/keys
    • (optional) use des3 encryption on the key: openssl rsa -in jocke.key -des3 -out jocke.3des.key

Until now, I've been following the guide from http://readwrite.com/2014/04/10/raspberry-pi-vpn-tutorial-server-secure-web-browsing. I'm going to use bridging however, so steps 9 and onward aren't suitable for me. Instead I followed the guide from http://www.emaculation.com/doku.php/bridged_openvpn_server_setup, specifically the "VPN Setup" part. It worked really well; I just changed the IPs to match my own network configuration/subnet. Also remember to change the settings mentioned in the "Final Settings in the VM" part. My server.conf is a mix of both guides. Here it is:

port 1194
proto udp
dev tap0
ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/xxxx.crt # SWAP WITH YOUR CRT NAME
key /etc/openvpn/easy-rsa/keys/xxxx.key # SWAP WITH YOUR KEY NAME
dh /etc/openvpn/easy-rsa/keys/dh1024.pem # If you changed to 2048, change that here!
remote-cert-tls client
ifconfig-pool-persist ipp.txt
server-bridge 192.168.11.1 255.255.255.0 192.168.11.201 192.168.11.254
client-to-client
keepalive 10 120
tls-auth /etc/openvpn/easy-rsa/keys/ta.key 0
cipher AES-128-CBC
comp-lzo
persist-key
persist-tun
status /var/log/openvpn-status.log 20
log /var/log/openvpn.log
verb 3
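One thing worth spelling out: with dev tap0 and server-bridge, OpenVPN expects tap0 to already exist and be bridged with eth0 before the daemon starts. The emaculation guide handles this part; the gist of it is roughly the classic bridge-start recipe below (a sketch using my 192.168.11.x addressing; assumes bridge-utils is installed and that you adapt the IPs):

openvpn --mktun --dev tap0              # create a persistent tap interface
brctl addbr br0                         # create the bridge
brctl addif br0 eth0                    # add the physical NIC to the bridge
brctl addif br0 tap0                    # add the VPN tap to the bridge
ifconfig tap0 0.0.0.0 promisc up        # bridged members carry no IP of their own
ifconfig eth0 0.0.0.0 promisc up
ifconfig br0 192.168.11.2 netmask 255.255.255.0 up   # the Pi's LAN address moves to br0
route add default gw 192.168.11.1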

You should now have created the user keys following the above commands/guides. Instead of copying multiple key files to the clients, I prefer using the script from http://readwrite.com/2014/04/11/building-a-raspberry-pi-vpn-part-two-creating-an-encrypted-client-side#awesm=~oB89WBfWrt21bV. It's very convenient and produces an .ovpn file which you can import into different OpenVPN clients. I'm too lazy to copy/paste; all I can say is that the default.txt file should match the server.conf (use the same options).
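For reference, here's roughly what a matching client config (the default.txt the script builds on) ends up looking like. This is a sketch: vpn.example.com is a placeholder for your own dyndns/WAN address, and the ca/cert/key material gets embedded inline by the script:

client
dev tap
proto udp
remote vpn.example.com 1194    # placeholder: your WAN address / dyndns name
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
tls-auth ta.key 1              # server side uses 0, client side uses 1
cipher AES-128-CBC             # must match server.conf
comp-lzo                       # must match server.conf
verb 3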

Also remember to set up the Linux firewall to permit packets to flow freely over the newly created tap0 and br0 interfaces:

iptables -A INPUT -i tap0 -j ACCEPT
iptables -A INPUT -i br0 -j ACCEPT
iptables -A FORWARD -i br0 -j ACCEPT

and to make this information persistent/permanent:

  • sudo bash -c 'iptables-save > /etc/network/iptables'
    • add a line to /etc/network/interfaces so the changes will become persistent;
      • pre-up iptables-restore < /etc/network/iptables (add it after the line iface eth0 inet dhcp)

 

The configuration part is now done. It’s always a good idea to look at the log files for better understanding and to check that everything is working. In my case everything looks (almost) fine:

cat /var/log/openvpn.log

Looking at the log when the VPN server starts (never mind the dates; username and IPs are censored):

Tue Dec  2 12:21:02 2014 OpenVPN 2.2.1 arm-linux-gnueabihf [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Oct 12 2013
Tue Dec  2 12:21:02 2014 NOTE: when bridging your LAN adapter with the TAP adapter, note that the new bridge adapter will often take on its own IP address that is different from what the LAN adapter was previously set to
Tue Dec  2 12:21:02 2014 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
Tue Dec  2 12:21:02 2014 Diffie-Hellman initialized with 1024 bit key
Tue Dec  2 12:21:02 2014 Control Channel Authentication: using '/etc/openvpn/easy-rsa/keys/ta.key' as a OpenVPN static key file
Tue Dec  2 12:21:02 2014 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue Dec  2 12:21:02 2014 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Tue Dec  2 12:21:02 2014 TLS-Auth MTU parms [ L:1590 D:166 EF:66 EB:0 ET:0 EL:0 ]
Tue Dec  2 12:21:02 2014 Socket Buffers: R=[163840->131072] S=[163840->131072]
Tue Dec  2 12:21:02 2014 TUN/TAP device tap0 opened
Tue Dec  2 12:21:02 2014 TUN/TAP TX queue length set to 100
Tue Dec  2 12:21:02 2014 Data Channel MTU parms [ L:1590 D:1450 EF:58 EB:135 ET:32 EL:0 AF:3/1 ]
Tue Dec  2 12:21:02 2014 UDPv4 link local (bound): [undef]
Tue Dec  2 12:21:02 2014 UDPv4 link remote: [undef]
Tue Dec  2 12:21:02 2014 MULTI: multi_init called, r=256 v=256
Tue Dec  2 12:21:02 2014 IFCONFIG POOL: base=192.168.11.201 size=54, ipv6=0
Tue Dec  2 12:21:02 2014 ifconfig_pool_read(), in='xxxxx,192.168.11.201', TODO: IPv6
Tue Dec  2 12:21:02 2014 succeeded -> ifconfig_pool_set()
Tue Dec  2 12:21:02 2014 IFCONFIG POOL LIST
Tue Dec  2 12:21:02 2014 xxxxx,192.168.11.201
Tue Dec  2 12:21:02 2014 Initialization Sequence Completed

Looking at the (same) log when a client connects:

Mon Dec  1 15:52:13 2014 MULTI: multi_create_instance called
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Re-using SSL/TLS context
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 LZO compression initialized
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Control Channel MTU parms [ L:1590 D:166 EF:66 EB:0 ET:0 EL:0 ]
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Data Channel MTU parms [ L:1590 D:1450 EF:58 EB:135 ET:32 EL:0 AF:3/1 ]
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Local Options hash (VER=V4): 'xxxxxx'
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Expected Remote Options hash (VER=V4): 'xxxxxx'
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 TLS: Initial packet from [AF_INET]x.x.x.x:51208, sid=40186f54 2801328f
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 VERIFY OK: depth=1, /C=XX/ST=XX/L=XXX/O=XXXXXXXXX/OU=xxx/CN=xxxxx/name=XX/emailAddress=mail@host.domain
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Validating certificate key usage
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 ++ Certificate has key usage  0080, expects 0080
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 VERIFY KU OK
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Validating certificate extended key usage
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 ++ Certificate has EKU (str) TLS Web Client Authentication, expects TLS Web Client Authentication
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 VERIFY EKU OK
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 VERIFY OK: depth=0, /C=XX/ST=XX/L=XXX/O=XXXXXXXXX/OU=xxx/CN=xxxxx/name=XX/emailAddress=mail@host.domain
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Data Channel Encrypt: Cipher 'AES-128-CBC' initialized with 128 bit key
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Data Channel Decrypt: Cipher 'AES-128-CBC' initialized with 128 bit key
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Mon Dec  1 15:52:13 2014 x.x.x.x:51208 [xxxxx] Peer Connection Initiated with [AF_INET]x.x.x.x:51208
Mon Dec  1 15:52:13 2014 xxxxx/x.x.x.x:51208 MULTI_sva: pool returned IPv4=192.168.11.201, IPv6=bccd:800:8ced:200:14c2:700:8427:5201
Mon Dec  1 15:52:16 2014 xxxxx/x.x.x.x:51208 PUSH: Received control message: 'PUSH_REQUEST'
Mon Dec  1 15:52:16 2014 xxxxx/x.x.x.x:51208 send_push_reply(): safe_cap=960
Mon Dec  1 15:52:16 2014 xxxxx/x.x.x.x:51208 SENT CONTROL [xxxxx]: 'PUSH_REPLY,route-gateway 192.168.11.1,ping 10,ping-restart 120,ifconfig 192.168.11.201 255.255.255.0' (status=1)
Mon Dec  1 15:52:16 2014 xxxxx/x.x.x.x:51208 MULTI: Learn: 7e:0a:47:c6:2a:76 -> xxxxx/x.x.x.x:51208
Mon Dec  1 15:52:29 2014 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Mon Dec  1 15:52:39 2014 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Mon Dec  1 15:52:49 2014 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Mon Dec  1 15:53:20 2014 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Mon Dec  1 15:53:30 2014 read UDPv4 [ECONNREFUSED]: Connection refused (code=111)
Mon Dec  1 15:56:16 2014 read UDPv4 [EHOSTUNREACH]: No route to host (code=113)

I was wondering what the ECONNREFUSED was all about. A little bit of googling got me on the right track. The solution was to add the following line to server.conf:

push "explicit-exit-notify 3"

After this trick the errors went away.

Source: https://forums.openvpn.net/topic10674.html

 

Finally I thought I’d read about server hardening so I won’t leave any holes open for hackers:

https://community.openvpn.net/openvpn/wiki/Hardening
https://openvpn.net/index.php/open-source/documentation/howto.html#security
http://darizotas.blogspot.fi/2014/04/openvpn-hardening-cheat-sheet.html

Most of these options are already in use so I’m feeling safe enough. This is after all only a little hobby server and not a big company server. I’ve learned tons and tons of stuff and I now have a working OpenVPN server at home 🙂

(A rather secure) Raspberry Pi Puppy Cam

My girlfriend recently got a puppy (Fig 2), so I decided to build a puppy cam (Fig 1) for her/us 🙂 I already had a spare Raspberry Pi with all the needed hardware lying around.

RaspberryPi

Fig 1. Raspberry Pi with Logitech QuickCam Fusion

minni2

Fig 2. The camera victim (Flat-Coated Retriever)

 

Components:

  • Raspberry Pi Model B
  • Clear Raspberry Pi Case from www.modmypi.com
  • 16GB SD card
  • Logitech QuickCam Fusion (old crap capable of 640×480)
  • D-Link DWA-121 802.11n Wireless N 150 Pico Wi-Fi-adapter
  • Deltaco AC adapter, 230V – 5V, 1A, Micro USB, 1.8m
  • Raspbian (Wheezy), Release 2014-01-07
  • (for setup: HDMI-to-DVI adapter, usb hub, usb mouse + keyboard)

 

Steps:

  • Installed Raspbian on a 16GB SD-card following the guide from https://www.andrewmunsell.com/blog/getting-started-raspberry-pi-install-raspbian
  • Configured some default options like password, system locale and so on after first start-up. Also enabled SSH (and disabled root login over SSH in /etc/ssh/sshd_config: PermitRootLogin no).
  • Followed a nice guide from http://www.codeproject.com/Articles/665518/Raspberry-Pi-as-low-cost-HD-surveillance-camera, with some modifications;
    • I’m not using the Raspberry Pi camera module, instead an old Logitech QuickCam Fusion, http://www.logitech.com/en-us/support/278?crid=405
    • updated the Raspberry Pi, sudo rpi-update
    • updated all packages, sudo apt-get update, sudo apt-get upgrade
    • Configured Wi-Fi following http://mattluurocks.com/index.php/raspbmc-dlink-dwa121-usb-pico-adapter
    • Checked that the camera was detected (it was):

        root@xxx: /home/xxxx# lsusb
        Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
        Bus 001 Device 004: ID 046d:08c1 Logitech, Inc. QuickCam Fusion
        Bus 001 Device 005: ID 2001:3308 D-Link Corp. DWA-121 802.11n Wireless N 150 Pico

    • Installed the motion detection software:
      • sudo apt-get install motion
    • enabled the motion daemon so it auto-starts, in /etc/default/motion. Changed the line to: start_motion_daemon=yes
    • chmodded the files according to the above mentioned guide.
    • also edited /etc/motion/motion.conf following the guide, but managed to break my own configuration 🙂 (the motion process killed itself after a couple of seconds…)
      • A bit of detective work in /var/log/messages revealed:

          motion: [1] cap.card: "UVC Camera (046d:08c1)"
          motion: [1] cap.bus_info: "usb-bcm2708_usb-1.2"
          motion: [1] cap.capabilities=0x84000001
          motion: [1] - VIDEO_CAPTURE
          motion: [1] - STREAMING
          motion: [1] Config palette index 8 (YU12) doesn't work.
          motion: [1] Supported palettes:
          motion: [1] 0: MJPG (MJPEG)
          motion: [1] 1: YUYV (YUV 4:2:2 (YUYV))
          motion: [1] Selected palette YUYV

      • changed the value to v4l2_palette 2 in motion.conf. Success! Motion now keeps running.
    • Made a directory for captures, mkdir /home/xxxx/captures, and pointed the configuration to that dir: target_dir /home/xxxx/captures
    • Had a look at http://www.lavrsen.dk/foswiki/bin/view/Motion/ConfigFileOptions
      • my own changes if someone is interested (along with the other changes above):
        • daemon on
        • width 640, height 480
        • framerate 5
        • pre_capture 2
        • post_capture 2
        • max_mpeg_time 600
        • output_normal off (I don’t need saved pictures, only videos)
        • ffmpeg_video_codec msmpeg4
        • webcam_port 8080
        • webcam_localhost off
        • control_port 8081
        • control_localhost off
        • control_authentication xxx:xxx

Setting up a cron job for motion:

I don't want to have the cam running 24/7, so I decided to set up a cron job to fix that. Steps:

  • changed to the root user instead of the "xxxx" user: sudo -s
  • edited the crontab file: crontab -e
    • pasted the following:

      30 8 * * * /usr/bin/motion
      30 15 * * * /usr/bin/killall motion

    • Check the file/cron list with crontab -l

This will start motion at 8.30AM and shut it down at 3.30PM (daily)

Cron source: http://superuser.com/questions/169654/how-to-schedule-motion-detection

 

Securing (SSH on) the RPi

Because I forward the SSH port to the WAN side, I want to stay safe. (Yes, allowing connections only with SSH keys is the safest method, I know, but a bit over the top for this project. Instead I'll focus on securing SSH overall). Raspbian doesn't seem to understand TCP wrappers (hosts.allow & hosts.deny), so I decided to use iptables instead. (Yes, I could have used another port than 22, but if some hacker wants to get in… they will anyhow). After a bit of fiddling I got it working.

At first, I installed a package called fail2ban (www.fail2ban.org): sudo apt-get install fail2ban. It automatically bans IP addresses that fail to authenticate over SSH too many times. (The default fail2ban options for SSH are OK for me, maxretry = 6). This is the first layer of protection.
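For reference, the stock SSH jail on Debian/Raspbian looks roughly like this (a sketch from memory; check your own /etc/fail2ban/jail.conf for the authoritative defaults):

[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6

After this I added some iptables rules for additional protection: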

root@xxx:/home/xxx
iptables -A INPUT -j ACCEPT -m state --state ESTABLISHED,RELATED (read comment in sources below, first link)
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT (open up port 80 for nginx web server)
iptables -A INPUT -p tcp --dport 8080 -m state --state NEW -j ACCEPT (open up port 8080 for motion's own web server)
iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT (open up for ping)
iptables -I INPUT -p tcp -m tcp -s xxxx.xxx.xxx.xx --dport 22 -j ACCEPT (SSH: my work pc)
iptables -I INPUT -p tcp -m tcp -s xxxx.xxx.xxx.xx --dport 22 -j ACCEPT (SSH: another linux login server)
iptables -I INPUT -p tcp -m iprange --src-range 192.168.0.100-254 --dport 22 -j ACCEPT (SSH: access from internal network)
iptables -I INPUT -p tcp -m tcp -s 0.0.0.0/0 --dport 22 -j DROP (SSH: deny all the rest)
iptables -P INPUT DROP (block all inbound traffic not accepted by a rule)

Sources:
http://virtualitblog.blogspot.fi/2013/05/installing-iptables-on-raspberry-pi.html
http://blog.self.li/post/63281257339/raspberry-pi-part-1-basic-setup-without-cables
http://www.skullbox.net/iptables-specific-ip.php
http://serverfault.com/questions/161401/how-to-allow-a-range-of-ips-with-iptables

Then we should save the rules so they become persistent:
  • sudo bash -c 'iptables-save > /etc/network/iptables'
  • then adding a line to /etc/network/interfaces so the changes will be persistent:
    • pre-up iptables-restore < /etc/network/iptables (add it after the line iface eth0 inet dhcp for ethernet connection or after iface wlan0 inet dhcp if on wlan)
  • Changes are now permanent

Source: http://www.simonthepiman.com/how_to_setup_your_pi_for_the_internet.php

We can check what the current iptables look like by looking at the (auto-created) file /etc/network/iptables:

root@xxxx:/home/xxxx# cat /etc/network/iptables
# Generated by iptables-save v1.4.14 on Tue Jun  3 15:53:59 2014
*filter
:INPUT DROP [27:4572]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [90:10559]
:fail2ban-ssh - [0:0]
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -s xxxx.xxx.xxx.xx/32 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m iprange --src-range 192.168.0.100-254.0.0.0 -m tcp --dport 22 -j ACCEPT
-A INPUT -s xxxx.xxx.xxx.xx/32 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -m state --state NEW -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A fail2ban-ssh -j RETURN
COMMIT
# Completed on Tue Jun  3 15:53:59 2014

and the same with the iptables -L command:

root@xxx:/home/xxxx# iptables -L
Chain INPUT (policy DROP)
target     prot opt source                    destination
fail2ban-ssh  tcp  --  anywhere          anywhere             multiport dports ssh
fail2ban-ssh  tcp  --  anywhere          anywhere             multiport dports ssh
ACCEPT     tcp  --  xxxxx.xxx.fi             anywhere             tcp dpt:ssh (my workstation)
ACCEPT     tcp  --  anywhere               anywhere             source IP range 192.168.0.100-254.0.0.0 tcp dpt:ssh
ACCEPT     tcp  --  xxxxx.xxx.fi             anywhere             tcp dpt:ssh (another linux login server)
DROP       tcp  --  anywhere                anywhere             tcp dpt:ssh
ACCEPT     all  --  anywhere                anywhere             state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere               anywhere             tcp dpt:http state NEW
ACCEPT     tcp  --  anywhere               anywhere             tcp dpt:http-alt state NEW
ACCEPT     icmp --  anywhere             anywhere             icmp echo-request

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain fail2ban-ssh (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

 

Installing NGINX Web Server (for HTTP Authentication)

As the basic installation of motion doesn't support authentication for the stream, I needed some other way of protecting it. My solution was to use the NGINX web server for authentication. I won't use a reverse proxy to redirect directly to the stream, as I need a "middle page" with some HTML code (so I can watch the stream in any browser). More on that in the chapter Motion MJPEG "fix" for any browser.

To be able to watch the puppy cam from anywhere on the Internet and not only from your own LAN, you have to use port forwarding on your router. I won't go into the details here, as there are many different guides available on the net. That said, I forwarded ports 80, 8080 and 22 from the internal network to the external network. (Yes, I'm using these default ports, as a hacker will find the correct ports to hack anyway). I've also registered a (free) dynamic-to-static DNS address on www.noip.com. You can enter this noip information into the router's configuration, but the configuration differs between router brands. (It's probably called something like "Dynamic DNS" though). With all this done, I can now watch the puppy cam from any computer or device by just entering the web address http://(censored).noip.me (and login+password) in a browser. Anyways, here are the steps for installing and configuring nginx:

  • sudo apt-get install nginx
    • (Auto)start nginx service:  service nginx start
  • sudo apt-get install lynx (terminal based browser for testing). Linux curl-command can also be used.
  • Testing that it works: lynx 127.0.0.1 – response: Welcome to nginx! (It works!)
  • Install apache utils to generate htpasswd files for authentication, sudo apt-get install apache2-utils
  • took a backup of the /etc/nginx/sites-available/default file. Then edited it:
    • changed root /usr/share/nginx/www; to root /home/xxx/www; (easier and more logical to edit and manage the webpage from /home).
    • created a htpasswd file: sudo htpasswd -c /home/xxx/.htpasswd xxxxx
    • configured the root dir of the website to use htpasswd, under location / {
      • auth_basic "Restricted";
      • auth_basic_user_file /home/xxx/.htpasswd;
  • The whole (tiny) configuration now looks like:

server {
        listen 80;

        root /home/xxx/www;
        index index.html index.htm;

        location / {
                try_files $uri $uri/ /index.html;
                auth_basic "Restricted";
                auth_basic_user_file /home/xxx/.htpasswd;
        }
}

and my fancy index.html file looks like:

<html>
<head>
<title>Welcome to xxxxxxxcam!</title>
</head>
<body bgcolor="white" text="black">
<center><h1>This is the xxxxx webcam stream!</h1></center><br>
<center>
<h3><a href="http://censored.noip.me:8080">Firefox link</a></h3><br>
<h3><a href="index2.html">IE/Chrome/Mobile link</a></h3><br>
</center>
</body>
</html>

and in a screenshot:

nginx_index_page1

Fig 3. Main page (after I’ve entered login & password)

The Firefox link links directly to the motion stream, as Firefox natively supports MJPEG. The IE/Chrome link links to another webpage which uses Java to display the MJPEG stream (see the chapter: Motion MJPEG "fix" for any browser). That page (index2.html) looks like this:

<html>
<head>
<title>Welcome to xxxxxcam!</title>
</head>
<body bgcolor="white" text="black">
<center><h1>This is the xxxxxx webcam stream!</h1></center><br>
<center>
<applet code=com.charliemouse.cambozola.Viewer
archive=cambozola.jar width="640" height="480" style="border-width:1; border-color:gray; border-style:solid;"> <param name=url value="http://censored.noip.me:8080"></applet>
</center>
</body>
</html>

So basically what I’ve done is setup a password protected login page from which you can choose the method of displaying the stream.

 

Setting up a cron job for nginx:

Same principle as with motion, except:

35 8 * * * /etc/init.d/nginx start
35 15 * * * /etc/init.d/nginx stop

This will start nginx at 8.35AM and shut it down at 3.35PM (daily)

Useful nginx file locations:

/etc/nginx/sites-available and the default file
/etc/nginx and the nginx.conf file
/var/log/nginx and the error.log & access.log files

Starting and stopping the webserver:

service nginx start
service nginx stop

Sources:

http://www.ducky-pond.com/posts/2013/Sep/setup-a-web-server-on-rpi/
https://gist.github.com/mcfadden/7063035
http://nginx.org/en/docs/beginners_guide.html

 

Motion MJPEG “fix” for any browser

The problem is that Internet Explorer (and other browsers as well) doesn’t support multipart jpeg (MJPEG). There’s a fix available at:

http://www.lavrsen.dk/foswiki/bin/view/Motion/WebcamServer 

This assumes that you create an HTML page in which you include a bit of code. From the webpage:

"The webserver generates a stream in 'multipart jpeg' format (mjpeg). You cannot watch the stream with most browsers. Only certain versions of Netscape works. Mozilla and Firefox browsers can view the mjpeg stream but you often have to refresh the page once to get the streaming going. Internet Explorer cannot show the mjpeg stream. For public viewing this is not very useful. There exists a java applet called Cambozola which enabled any Java capable browser to show the stream. To enable the feature to a broad audience you should use this applet or similar."

 

Securing NGINX with Fail2Ban

Well, I didn't even have the server online for long before someone started probing/bombing for usernames and passwords (judging by the access and error logs). Sample from /var/log/nginx/error.log:

2014/06/09 15:38:12 [error] 4925#0: *24 user "manager" was not found in "/home/xxxx/.htpasswd", client: 208.109.87.x, server: , request: "GET /manager/html HTTP/1.1", host: "x.x.x.x"
2014/06/09 15:38:13 [error] 4925#0: *24 user "manager" was not found in "/home/xxxx/.htpasswd", client: 208.109.87.x, server: , request: "GET /manager/html HTTP/1.1", host: "x.x.x.x"
2014/06/09 15:38:14 [error] 4925#0: *24 user "user" was not found in "/home/xxxx/.htpasswd", client: 208.109.87.x, server: , request: "GET /manager/html HTTP/1.1", host: "x.x.x.x"
2014/06/09 15:38:16 [error] 4925#0: *24 user "user" was not found in "/home/xxxx/.htpasswd", client: 208.109.87.x, server: , request: "GET /manager/html HTTP/1.1", host: "x.x.x.x"

and from access.log:

208.109.87.x - manager [09/Jun/2014:15:38:12 +0300] "GET /manager/html HTTP/1.1" 401 194 "-" "-"
208.109.87.x - manager [09/Jun/2014:15:38:13 +0300] "GET /manager/html HTTP/1.1" 401 194 "-" "-"
208.109.87.x - user [09/Jun/2014:15:38:14 +0300] "GET /manager/html HTTP/1.1" 401 194 "-" "-"
208.109.87.x - user [09/Jun/2014:15:38:16 +0300] "GET /manager/html HTTP/1.1" 401 194 "-" "-"

Apparently “they” are trying to access /manager/html (Tomcat probing?) which doesn’t even exist on my site… oh well, this is not acceptable so I’ll have to block or ban these bastards. Of course I could try using https with certificates instead of http but it’s a bit overkill for this little server/hobby project 🙂

I started with a DDoS attack filter, info here: https://rtcamp.com/tutorials/nginx/fail2ban/

I then followed http://snippets.aktagon.com/snippets/554-how-to-secure-an-nginx-server-with-fail2ban to:

  • Block anyone trying to run scripts (.pl, .cgi, .exe, etc)
  • Block anyone trying to use the server as a proxy
  • Block anyone failing to authenticate using nginx basic authentication
  • Block anyone failing to authenticate using our application’s log in page
  • Block bad bots
  • Limit the number of connections per session

After this was done I ran:

root@xxx:/home/xxx# tail /var/log/fail2ban.log

2014-06-10 10:21:04,342 fail2ban.jail   : INFO   Jail 'ssh' started
2014-06-10 10:21:04,516 fail2ban.jail   : INFO   Jail 'nginx-req-limit' started
2014-06-10 10:21:04,618 fail2ban.jail   : INFO   Jail 'nginx-auth' started
2014-06-10 10:21:04,837 fail2ban.jail   : INFO   Jail 'nginx-login' started
2014-06-10 10:21:04,964 fail2ban.jail   : INFO   Jail 'nginx-badbots' started
2014-06-10 10:21:05,100 fail2ban.jail   : INFO   Jail 'nginx-noscript' started
2014-06-10 10:21:05,227 fail2ban.jail   : INFO   Jail 'nginx-proxy' started

(iptables -L now also lists all these new fail2ban rules. I won't paste them here as the list is a bit long…)

Luckily I did apply these filters, because the next day I got bombed by a ZmEu attack. Information about ZmEu:

http://ensourced.wordpress.com/2011/02/25/zmeu-attacks-some-basic-forensic/
http://support.scalr.net/discussions/questions/1841-should-i-be-worried-about-w00tw00tatblackhatsromaniananti-sec
http://stackoverflow.com/questions/13897993/am-i-being-hacked

Probably nothing to worry about, as ISPs are doing their own penetration testing all the time. Fail2Ban blocked it however (fail2ban.log):

2014-06-11 13:33:34,301 fail2ban.actions: WARNING [nginx-noscript] Ban 89.248.160.x
2014-06-11 13:43:34,409 fail2ban.actions: WARNING [nginx-noscript] Unban 89.248.160.x

 

With all this done, I now feel rather safe. After all, this is not a production server in Redmond 🙂

(If I do feel like experimenting with more security one day, I’ll compile my own Nginx with ModSecurity.  (http://www.modsecurity.org/projects/modsecurity/nginx/))

And there you have it – a rather nice and secure puppy cam. Enjoy! 🙂

 

Update: Version 2.0 of the Puppy Cam available here

Adding PXELINUX to WDS (Mixed-mode Windows/Linux Deployment)

I've been using WDS + MDT for a long time and I've been happy. This didn't stop me from thinking about adding more PXE alternatives than the standard (Windows) WDS PXE environment though. I like to experiment, and this time I experimented with a mixed mode of Windows and Linux PXE. What you'll need, in short, is PXELINUX to replace your out-of-the-box WDS PXE boot environment. Your old familiar Windows Deployment Services will still be left intact; it will just be added as a separate PXE boot option in the boot menu. I found some pretty good instructions, but as usual I like to write my own. I won't write about the whole experiment though, as the main guide is available from:

http://thommck.wordpress.com/2011/09/09/deep-dive-combining-windows-deployment-services-pxelinux-for-the-ultimate-network-boot/

I followed the guide, with the following changes:

  • Could not find all (three) files in any of the syslinux packages I tried downloading from kernel.org. My solution was to (yum) install the syslinux package on one of my test workstations and just copy the files from there. It happened to be a Fedora 19 box, if someone is interested.
  • Copied the files to \\RemoteInstall instead of \\Reminst (I’m using Windows Server 2008 R2, to be upgraded to 2012 R2 soon).
  • Copied the files to BOTH x64 and x86 directories on the WDS server. My virtual machine didn’t like booting from x86 even though my newly created virtual test machine was supposed to be 32 bit. Oh well, with the files copied for both architectures it did work. I did my changes in the x64 directory however.
  • Because of this, I also had to change the flag in the wdsutil command:
    • wdsutil /set-server /bootprogram:boot\x64\pxelinux.com /architecture:x64
    • wdsutil /set-server /N12bootprogram:boot\x64\pxelinux.com /architecture:x64
  • Added Gparted Live, Memtest86+ and (later) Ubuntu to the menu
  • Did a test-run and it booted 🙂 (Fig 1)

pxe_boot_menu

Fig 1. PXE Boot Menu (Pxelinux/Syslinux). Better screenshot later in the document (Fig 10).

memtest

Fig 2. Memtest86+

  • Configured GParted Live a bit differently than in the guide. I had to scratch my head a bit over the fetch=http part but got it working (UPDATE: now using NFS instead, see note).
    • Didn’t quite understand the webserver part (and I’m not an expert on IIS). Had a look at http://www.syslinux.org/wiki/index.php/WDSLINUX which gave me some small hints… “If on IIS create a new virtual directory and set the mime type for .* extension to text/plain)”. Still very cryptic to me though 🙂
    • Well, doesn’t hurt to try. I installed the IIS Server Role on the WDS Server.
    • Went to properties and created a new virtual directory (Fig 3). The alias was set to “gparted” and the physical path to D:\RemoteInstall\Boot\x64\Linux\gparted (where all the files for gparted are stored).

iis

Fig 3. IIS Manager

  • Created the mime type for .* (Fig 4).

iis2

Fig 4. Setting mime type for .*

  • Also changed the configuration file in syslinux/pxelinux to point to this virtual directory:
    • append initrd=\Linux\gparted\initrd.img boot=live config  noswap noprompt  nosplash  fetch=http://x.x.x.x/gparted/filesystem.squashfs
    • Actually worked 🙂 (Fig 5)

gparted

Fig 5. GParted booted in an “empty” 5GB virtual machine with no partitions.

 

NOTE: Not using http/IIS for GParted anymore. Also got this working with NFS. Read the next chapter about Ubuntu and you’ll understand. I just copy/paste the working configuration here:

LABEL gparted
    MENU LABEL GParted Live
    kernel \Linux\gparted\vmlinuz
    append initrd=\Linux\gparted\initrd.img boot=live config noswap noprompt nosplash netboot=nfs nfsroot=x.x.x.x:/Linux/GParted
    # append initrd=\Linux\gparted\initrd.img boot=live config  noswap noprompt  nosplash  fetch=http://x.x.x.x/gparted/filesystem.squashfs (not in use now)

Other changes from http to NFS: I had to make a directory called “live” and copy/move the filesystem.squashfs into that dir. Source: http://gparted-forum.surf4.info/viewtopic.php?id=14165

 

Moving along to Ubuntu…

I found many guides, but they were all much the same. None had working configurations for me 😦 I had to try almost everything by trial and error. I'm still not even sure if the problem was a bad parameter or a wrongly configured NFS server. Oh well, to help anyone else out there, here are my steps (in no particular order):

  • Got a headache from all the testing 🙂
  • Downloaded Ubuntu Netboot Image from http://cdimage.ubuntu.com/netboot/
  • Downloaded an Ubuntu ISO, 13.10 (64-bit) in my case
  • Extracted both the Netboot Image and the Ubuntu ISO and copied them to the WDS Server:
    • D:\RemoteInstall\Boot\x64\Linux\Ubuntu\ubuntu-installer (Netboot version)
    • D:\RemoteInstall\Boot\x64\Linux\Ubuntu\ubuntu-full (Extracted Full version ISO)
  • Add the Role Services: Services for Network File System to your WDS Server (Fig 6).

nfs_server2008r2

Fig 6. Services for Network File System (NFS)

  • Configured NFS. My steps in the screenshot (Fig 7):

nfs_sharing_steps

Fig 7. NFS-Sharing

  1. Manage NFS Sharing… (Choose Properties/NFS Sharing on the dir. I shared D:\RemoteInstall\Boot\x64\Linux)
  2. NFS Advanced Sharing (changed settings according to picture)
  3. Permissions (put a mark in Allow root access)

 

I got very confused by all the parameters from all the instructions I found. It seems that the vmlinuz file isn't the same in Ubuntu 13.10 as in older distros. Correct me if I'm wrong. It took me a long time to figure out the correct settings for the configuration file "default" (\Boot\x64\pxelinux.cfg\default).

After some serious testing, it turned out that I had to use a combination of both the Netboot version and the full version of Ubuntu to get my Live-CD to PXE boot.

Here’s a sample from http://www.howtogeek.com/61263/how-to-network-boot-pxe-the-ubuntu-livecd/ which didn’t work out of the box for me:

LABEL Ubuntu Livecd 11.04
MENU DEFAULT
KERNEL howtogeek/linux/ubuntu/11.04/casper/vmlinuz
APPEND root=/dev/nfs boot=casper netboot=nfs nfsroot=<YOUR-SERVER-IP>:/tftpboot/howtogeek/linux/ubuntu/11.04 initrd=howtogeek/linux/ubuntu/11.04/casper/initrd.lz quiet splash --

In this example, the Ubuntu files are extracted to \Linux\Ubuntu\11.04. On my server that would correspond to the exact path D:\RemoteInstall\Boot\x64\Linux\Ubuntu\11.04. Then there are a bunch of sub-directories, one of which is named "casper". Casper holds the kernel so Ubuntu can start/boot over network/PXE. HOWEVER, in Ubuntu 13.10 there's NO file called vmlinuz; it's called vmlinuz.efi. As far as I know, pxelinux can't boot this file (at least not on my test setup; I don't know about pxelinux's EFI support either…). Well, the solution was to use the netboot version for the kernel and the full distro (extracted) for the actual installation. It's probably easiest to just post my current working configuration:

LABEL Ubuntu
    menu label Ubuntu 13.10, 64bit
    kernel \linux\ubuntu\ubuntu-installer\amd64\linux
    append root=/dev/nfs boot=casper netboot=nfs nfsroot=x.x.x.x:/Linux/Ubuntu/ubuntu-full initrd=/Linux/Ubuntu/ubuntu-full/casper/initrd.lz quiet splash

I have extracted the Ubuntu netboot version to D:\RemoteInstall\Boot\x64\Linux\Ubuntu\ubuntu-installer and the full version Ubuntu to D:\RemoteInstall\Boot\x64\Linux\Ubuntu\ubuntu-full. I’m sharing the “Linux” dir over NFS as described earlier.
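Before PXE-booting a client, it may be worth sanity-checking the NFS export from any Linux machine first. A quick check, assuming x.x.x.x is the WDS/NFS server:

showmount -e x.x.x.x
# the export list should include the Linux share (and thus /Linux/Ubuntu/ubuntu-full)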

All of this configuration was just for a plain installation of Ubuntu 13.10. There’s much that can/should be automated with kickstart or similar. There are some packages that should be installed in the post-phase of the installation and also some configuration files that should be copied (from the nfs share). I’ll leave this for another post.
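As a small teaser, automating the live installer is mostly a matter of extra kernel parameters. A minimal sketch (untested on my setup) would be to point casper/ubiquity at a preseed file served over HTTP; automatic-ubiquity and preseed/url are the usual parameters for this, and the preseed URL below is a placeholder:

LABEL Ubuntu-auto
    menu label Ubuntu 13.10, 64bit (automated)
    kernel \linux\ubuntu\ubuntu-installer\amd64\linux
    append root=/dev/nfs boot=casper automatic-ubiquity netboot=nfs nfsroot=x.x.x.x:/Linux/Ubuntu/ubuntu-full initrd=/Linux/Ubuntu/ubuntu-full/casper/initrd.lz preseed/url=http://x.x.x.x/preseed.cfg quiet splash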

 Ubuntu_install1

Fig 8. Ubuntu pxe-booted “live”.

Ubuntu_install2

Fig 9. Installing Ubuntu from the live session

 

My configuration file if someone is interested:

DEFAULT      vesamenu.c32
PROMPT       0

MENU TITLE Abo Akademi, Dept. of IT PXE Boot Menu
MENU INCLUDE pxelinux.cfg/graphics.conf
MENU AUTOBOOT Starting Windows Deployment Services in 8 seconds…
         
# Option 1 – Run WDS
LABEL wds
    MENU LABEL Windows Deployment Services
    menu default
    timeout 80
    TOTALTIMEOUT 9000
    KERNEL pxeboot.0

        
# Option 2 – Run gparted
LABEL gparted
    MENU LABEL GParted Live
    kernel \Linux\gparted\vmlinuz
    append initrd=\Linux\gparted\initrd.img boot=live config noswap noprompt nosplash netboot=nfs nfsroot=x.x.x.x:/Linux/GParted
    # append initrd=\Linux\gparted\initrd.img boot=live config  noswap noprompt  nosplash  fetch=http://x.x.x.x/gparted/filesystem.squashfs (not in use anymore)
   

# Option 3 – Run memtest86+
LABEL memtest86+
    menu label Memtest86+
    kernel \Linux\memtest\memtest

   
# Option 4 – Ubuntu
LABEL Ubuntu
    menu label Ubuntu 13.10, 64bit
    kernel \linux\ubuntu\ubuntu-installer\amd64\linux
    append root=/dev/nfs boot=casper netboot=nfs nfsroot=x.x.x.x:/Linux/Ubuntu/ubuntu-full initrd=/Linux/Ubuntu/ubuntu-full/casper/initrd.lz quiet splash

   
# Option 5 – Exit PXE Linux
LABEL Abort
    MENU LABEL Exit
    KERNEL abortpxe.0

 

I left graphics.conf with its default settings (Fig 10, modified in Fig 1). Here is a screenshot of the whole thing in action:

pxe_boot_menu_final

Fig 10. PXELinux PXE Boot Menu

I already have screenshots of the other alternatives on the boot menu, so just for fun I added some screenshots of what happens when you pick the default option and boot Windows Deployment Services:

wds_loading_files_pxelinux1

Fig 11. Loading boot file from WDS.

pxelinux_wds_mdt_task_sequences

Fig 12. After the boot file has loaded from WDS, it’s time for MDT’s Task Sequences. Everything about this procedure is already covered in my previous post Deploying Windows 7/8 with Microsoft Deployment Toolkit (MDT) 2012 Update 1 and Windows Deployment Services (WDS)

 

Are you still reading? Instead go ahead and try PXE booting every OS out there 😉

 

Sources (those that I can remember):

http://www.syslinux.org/wiki/index.php/WDSLINUX
http://www.howtogeek.com/61263/how-to-network-boot-pxe-the-ubuntu-livecd/
http://s205blog.wordpress.com/2012/10/02/ubuntu-12-04-lte-pxe-network-installation-tutorial/
http://rebholz.wordpress.com/2012/05/17/technical-how-to-installing-linux-edubuntu-using-pxe-boot-and-windows-7-as-a-server/
http://www.howtogeek.com/162809/how-to-pxe-boot-an-ubuntu-image-from-windows-server-2008/

Joining Ubuntu 13.04 to Windows Domain

Apart from Windows, our University is supporting Fedora on workstations and CentOS on servers. Not everybody is happy with Fedora, however, and Ubuntu has become very popular during the last few years. Ubuntu isn’t supported in the same way as Fedora, which (for us) means that there are only local users/authentication after a successful installation.

We need another way to authenticate and joining the computer to the Windows Active Directory Domain is an alternative. I did some research and LikewiseOpen seemed like the easiest way of accomplishing this.

“Likewise Open provides a complete authentication solution allowing *nix systems to be fully integrated into Active Directory environments. Created by Likewise Software to make Linux and Unix systems first class citizens on Windows networks, likewise-open will authenticate both Ubuntu Desktop Edition and Ubuntu Server Edition machines.”

Source: https://help.ubuntu.com/community/LikewiseOpen

 

My steps for joining an Ubuntu 13.04 machine to the Windows Domain / Active Directory:

sudo pico /etc/hostname , change it so it corresponds with the computer’s registered DNS name

Install LikewiseOpen:

sudo apt-get install likewise-open likewise-open-gui (source: likewise documentation)

Join the domain:

sudo domainjoin-gui (the cmd version wouldn’t work for me). Leave the domain with the same command.

likewise

Fig 1. Joining the Domain
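For reference, the command-line equivalent would be something like this (it didn’t work for me as noted above, but your mileage may vary; the domain and account names are placeholders):

sudo domainjoin-cli join ad.example.com Administrator
# and to leave the domain again:
sudo domainjoin-cli leave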

To get domain login options to the Ubuntu login screen (info for both 13.04 and 13.10):

for Ubuntu 13.04: sudo sh -c 'echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf'

for Ubuntu 13.10: sudo pico /etc/lightdm/lightdm.conf.d/10-ubuntu.conf

[SeatDefaults]
user-session=ubuntu
# to disable guest login
allow-guest=false
# to enable user login manually
greeter-show-manual-login=true
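For the greeter change to take effect, restart lightdm (logging out or rebooting also works). On these upstart-based releases that should be:

sudo restart lightdm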

Sources:

http://askubuntu.com/questions/210712/ubuntu-12-10-likewise-and-logging-in-to-the-domain

http://askubuntu.com/questions/62564/how-do-i-disable-the-guest-session

 

By default you have to log in to the domain with your user credentials in the form domain\username.

To skip this and login with only username:

sudo lwconfig assumeDefaultDomain true

Source: http://www.youtube.com/watch?v=sVT-0t4d48I

I had some problems finding the above command as the old trick will NOT work with Ubuntu 10 and newer versions.

(Old: sudo pico /etc/samba/lwlauthd.conf

winbind use default domain = yes)

 

Additional (optional) configuration and comments:

Put yourself as sudoer:

sudo pico /etc/sudoers
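For example, to give your own domain account sudo rights, a line along these lines should do it (DOMAIN\\myuser is a placeholder; note that the backslash has to be doubled in sudoers):

DOMAIN\\myuser ALL=(ALL:ALL) ALL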

Install OpenSSH server:

sudo apt-get install openssh-server

Edit /etc/hosts.allow & /etc/hosts.deny according to your needs.
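For example, to allow ssh only from your own subnet, something like this (the subnet is a placeholder):

# /etc/hosts.allow
sshd: 130.232.
# /etc/hosts.deny
sshd: ALL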

 

Checking likewise configuration after successful domain join:

cat /etc/krb5.conf

Checking likewise version:

dpkg-query -W likewise-open

 

Printers:

My installed Windows domain printers seemed to work just fine in Ubuntu too. I only had to make some small changes to page size and page type.

Windows Updates in Nagios

We’re currently using Nagios as our main monitoring system at the Department. There’s actually no need to change that (even though I tried SCOM 2012). One thing I’ve been missing in our current Nagios setup is notifications about Windows Updates. Honestly, I hadn’t even looked into that specific “problem” before now. That said, I decided to give it a try.

I started by doing some googling and found a nice solution which uses  NSClient++ (http://www.nsclient.org/nscp/) and a script (http://zeldor.biz/2012/02/icinganagios-check-windows-updates/) which checks for updates. I followed the steps with some minor changes:

  • Installed NSClient++ on the Windows Server(s)
  • Edited nsclient.ini (NSC.ini is for older versions):

[/modules]
NRPEServer = 1

[/settings/NRPE/server]
; the default port (5666) wouldn't work for some reason
port=5667
command_timeout=90
allow_arguments=0
use_ssl=1
socket_timeout=90

[/settings/external scripts/scripts]
check_win_updates=cscript.exe //T:90 //NoLogo scripts\\check_windows_updates.wsf /w:1 /c:10

[/settings/default]
; ALLOWED HOSTS - A comma-separated list of allowed hosts. You can use netmasks (/ syntax) or * to create ranges.
allowed hosts = abcdef.abo.fi

     

  • On server side:

commands.cfg:

define command {
    command_name    check_win_updates
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -p 5667 -c check_win_updates -t 120
}

services.cfg:

define service {
    hostgroup_name          check-win-updates
    service_description     Windows Updates
    check_command           check_win_updates
    use                     generic-service
    check_interval          2880
}

host-groups.cfg:

define hostgroup {
    hostgroup_name          check-win-updates
    alias                   Windows Updates
    # let's call the servers server1 and server2 in this example
    members                 server1,server2
}
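Before waiting for the scheduler, you can test the whole chain by hand from the Nagios server using the same arguments as in commands.cfg. A quick check, assuming the standard plugin path:

/usr/local/nagios/libexec/check_nrpe -H server1 -p 5667 -c check_win_updates -t 120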

 

Play around with the timers (-t) and see what suits your needs. If you want to check other things as well (hard disk space, CPU usage and so on) you’ll have to configure a bit more.

Examples:
http://nagios.sourceforge.net/docs/3_0/monitoring-windows.html
http://awaseroot.wordpress.com/2012/11/23/monitoring-windows-with-nagios/

In our case, however, checking for Windows Updates was enough.

Here’s a screenshot (Fig 1) from nagstamon running on my Windows 8 client:

nagastamon

Fig 1. nagstamon

There’s currently no information about Windows Updates as all our servers were already updated before the screenshot 🙂 The other information is from different linux servers and printers. Here’s another screenshot (Fig 2) from the Nagios web interface:

winupdates

Fig 2. Host information/status detail for host in Nagios

Here you can see that the Windows Update check is running and that no updates are waiting or installing.

So, there you have it – Windows Updates in Nagios.

Deploying Windows 7/8 with Microsoft Deployment Toolkit (MDT) 2012 Update 1 and Windows Deployment Services (WDS)

This document is a bit dated, I wrote it back in November 2012 (with some small updates later on).

 

 

Lab environment

 

I started out in a lab environment and moved over to production environment when everything was working as expected. My testing environment was (is) VMware Workstation.

I have to say that all the guides I found on the Internet were a bit confusing, but I finally got it working the way it should. I’ll try to recap my steps, and hopefully it won’t be as confusing for others trying to build a similar environment.

 

I basically followed these steps:

 

  • Installed Windows Server 2008 R2 Datacenter in a Virtual Machine.
  • Configured the Virtual Machine:
    • Network as host-only with a static IP-address.
    • Added a second virtual hard drive. It’s best practice to have the deployment share on a different drive/partition.
  • Installed the necessary software:
    • .NET Framework 3.5 from Server Manager, Features
    • Windows Automated Installation Kit (AIK) v. 3.0 (Update: please use Windows ADK)
    • Microsoft Deployment Toolkit (MDT) 2012 Update 1
  • Installed necessary Server Roles for WDS:
    • Active Directory Domain Services Server Role
    • DNS Server Role (configuration documentation not included for lab environment)
    • DHCP Server Role (configuration documentation not included for lab environment)
  • Copied a plain Windows 7 Enterprise 64-bit image to the server
  • Copied our production .wim-image to the server (also Windows 7 Enterprise 64-bit)

 

 

MDT

 

Now the server was ready for configuring the most important part, Microsoft Deployment Toolkit (MDT) 2012 Update 1. As I said before, many guides are available on the Internet but they can be confusing. One guide that helped me was:

http://www.vkernel.ro/blog/deploying-windows-7-with-wds-and-mdt-2010-part1

Thanks to the author for this one. Kept me going without giving up 🙂

Anyways, I’ll try to recap my steps:

 

  • Created a new Deployment share, D:\DeploymentShare$ in my case.
    • Disabled every step in options (wizard panes)
  • You’ll end up with a very basic vanilla Deployment Share. This has to be heavily customized for your own environment.
  • Add Operating System(s) either from Source (DVD) or from an image file (.wim). There are a couple of questions to answer during the OS import, but they can be googled if not self-explanatory.

 

clip_image002

Fig 1. Adding Operating Systems in MDT 2012

 

  • Above is a screenshot with two Operating Systems added. This is enough for my deployment. I used an old domain-image, which I installed in a virtual machine. I updated all programs and added some new ones. I then sysprep’ed the virtual machine and made an image with ImageX. (Took a snapshot before this so it’s easy to revert.) You can use other techniques to sysprep and capture (MDT’s own Task Sequence for example), but I used ImageX because I’ve done it before. You now have your “Golden Image”, which can be deployed straight away or modified by adding Applications or injecting drivers etc.

  • Many of the important settings are available when you right-click the deployment share and choose Properties. Fig 2 shows a screenshot of the default rules for the deployment share. Much can (and should) be changed. I’m not going through every setting here as you can find help online, for example:

http://scriptimus.wordpress.com/2011/05/06/mdt-2010-skipping-deployment-wizard-pages/

 

clip_image004

                Fig 2. Default Rules for the Deployment Share. 

 

Screenshots are better than text, so here are my rules after modifications. Almost all dialogs are bypassed, except machine name and domain. I also configured logging, as it’s nice to know if something went wrong (SLShare=\\WDS\Logs).

 

 clip_image006

                 Fig 3. CustomSettings.ini (Rules)
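Since Fig 3 is only a screenshot, here’s a rough text sketch of what rules along those lines can look like. The property names are standard MDT ones, but the values below are placeholders rather than our real settings:

[Settings]
Priority=Default

[Default]
OSInstall=Y
SkipCapture=YES
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerName=NO
SkipDomainMembership=NO
SLShare=\\WDS\Logs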

 

 

WDS

 

Time to move along to the wds-part. I’ve already installed the wds server role so now it’s time to configure it.

 

  • Start wds, right-click your server and choose configure server.

 

  • The instructions will tell you to add the default images (Install.wim and Boot.wim) that are included in the Windows 7 installation DVD (in the \Sources folder). This is where it gets a bit confusing (at least for me). DO NOT add the install image, JUST the boot image. This way, you just boot from the wds-server, and can point the installation to use an install image from your mdt share.

 

  • Go back to MDT and choose properties on your Deployment Share. Go to the Rules tab. Click Edit Bootstrap.ini, down in the right corner. Edit the file according to your environment. Here’s a screenshot of my customized file:

 

            clip_image008

 

    Fig 4. Bootstrap.ini                     
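Again, since Fig 4 is a screenshot, here’s a sketch of what a Bootstrap.ini of this kind typically contains (standard property names, placeholder values):

[Settings]
Priority=Default

[Default]
DeployRoot=\\WDS\DeploymentShare$
UserID=svc_mdt
UserDomain=mydomain.local
UserPassword=NotMyRealPassword
KeyboardLocale=fi-FI
SkipBDDWelcome=YES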

 

  • Every time you change a setting in Rules or Bootstrap.ini in MDT, you’ll have to UPDATE THE DEPLOYMENT SHARE (right-click the deployment share). This wasn’t that well documented.

Also, if you make changes to the Boot Image configuration (Bootstrap.ini), you will HAVE TO REPLACE the Lite Touch Windows PE (x64) boot image in WDS (right-click the current boot image and choose Replace) after you have updated the deployment share. Otherwise wds will boot with the old boot image. Choose the file from your Deployment Share\Boot\LiteTouchPE_x86.wim.

         clip_image010

            Fig 5. WDS

 

 

 

Back to MDT – Task Sequences

 

Anyways, back to MDT. Now it’s time to make some Task Sequences, which basically tell MDT what to do before, during and after Deployment. This is where the magic happens.

 

clip_image012

 Fig 6. MDT, Task Sequences.

 

  • Right-click Task Sequences, choose New Task Sequence
  • Give it an ID, Name and optionally a comment
  • Choose Standard Client Task Sequence (I won’t look into the other options in this document, though I will probably test them further on)
  • Choose your desired Image (Operating System)
  • Fill in the other information to suit your needs
  • Do not specify an Administrator Password at this time
  • Right-click or double-click to configure your newly created Task Sequence

 

Have a look at all the default options from your newly created Task Sequence. Modify and test-deploy to look at different options. Google and learn. I won’t go into details of all of the options as it would take forever. Information is available online, just use it.

 

I haven’t modified that much as my current image has most of the important settings already. I had a look at the partitioning (Preinstall/Format and Partition Disk) and changed the volume label. 100% disk use was good for me, so I didn’t change that. It’s easy to change it later according to your needs.

 

I have a custom script that configures MDT to allow the graphics driver auto detect method to set the screen resolution. Thanks to Johan Arwidmark for this script. Won’t paste the code here as it’s a bit too long…

(Source: http://www.deploymentresearch.com/Blog/tabid/62/EntryId/70/Going-Production-Deploy-Windows-8-using-MDT-2012-Update-1.aspx )

 

I also have a custom script that renames and disables the local Administrator account. It runs last in the “State Restore” phase of the deployment. It’s added via Add/General/Run Command Line and moved to the correct place in the sequence. It runs the command line cscript.exe "%SCRIPTROOT%\DisableAdmin.vbs", which basically runs a custom script from the default “Scripts” dir. The script contains the following:

 

strComputerName = "."
Set objUser = GetObject("WinNT://" & strComputerName & "/Administrator")

' set a known password, then disable the account
objUser.SetPassword "thePasswordFromCustomSettings.ini"
objUser.AccountDisabled = True
objUser.SetInfo

' rename the (now disabled) local Administrator account via WMI
Set objWMIService = GetObject("winmgmts:\\" & strComputerName & "\root\cimv2")
Set colAccounts = objWMIService.ExecQuery ("Select * From Win32_UserAccount Where LocalAccount = True And Name = 'Administrator'")

For Each objAccount in colAccounts
    objAccount.Rename "OldLocalAdm"
Next

 

(Source: http://social.technet.microsoft.com/Forums/en-US/itprovistadeployment/thread/87b61d5e-7085-465d-a2f0-5b5d131c6670#933ec6db-87ff-4b55-8f85-b190880f8e17 )

 

 

Deployment

 

Now it’s time to test the deployment process. You should already have configured wds with a boot image so that the clients can boot from it. You should also have specified the correct settings in Bootstrap.ini so that the Deployment Share (images) can be found from wds.

 

  • Make an “empty” virtual machine
  • Configure it to pxe-boot
  • Start it
  • Press F12 to boot from the network
  • Your WDS-server is found
  • Start Deployment and follow the on-screen instructions

 

clip_image014

Fig 7. Actual Deployment process/progress

 

 

 

 

Production environment

 

The setup is obviously different in the production environment. The wds-server is on our internal network, but has access to the public network (AD) via NAT. I’ll start with a picture of the whole setup to give you an idea of the configuration.

 

image

                                                                                                             Fig 8. Production Setup

 

Basically what we have here is a linux computer that is used to NAT/IP-masquerade the traffic to the internal network. On the internal side we have a separate linux dhcp-server that gives out leases to all of our internal clients. Three different subnets are configured, but the .17.x one is used for our wds-server. The linux dhcp server has to be configured to point booting clients at the windows wds-server. More on that later on.

The steps for installation are basically the same as for the lab environment, except for the dhcp-server and (no) AD. Here’s a list:

 

  • Installed Windows Server 2008 R2 Datacenter (in a Virtual Machine on a VMware ESXi 3.5 server)
  • Configured the server:
    • Network with static IP-address.
    • Added a second (virtual) hard drive. It’s best practice to have the deployment share on a different drive/partition.
  • Joined the server named “wds” to the production domain
  • Installed the necessary software:
    • .NET Framework 3.5 from Server Manager, Features
    • Windows Automated Installation Kit (AIK) v. 3.0
    • Microsoft Deployment Toolkit (MDT) 2012 Update 1
  • Installed necessary Server Roles for WDS:
    • WDS Server Role
    • DNS Server Role (not actually used, more on the configuration later on)
    • Didn’t install the DHCP Server Role, as I’m using the existing linux dhcp server (more on the configuration in the next chapter)
  • Copied a plain Windows 7 Enterprise 64-bit image to the server
  • Copied our production .wim-image to the server (also Windows 7 Enterprise 64-bit)

 

The steps for MDT are exactly the same as in the Lab environment. Same goes for WDS, except that I configured the server to boot from the production share. Some small changes were made in CustomSettings.ini (Rules), for example domain and username/password.

 

 

Linux DHCP

 

As I said before, I decided to use our existing linux dhcp-server for pxe-booting. For this to work, I added the following to /etc/dhcp3/dhcpd.conf :

 

subnet 192.168.17.0 netmask 255.255.255.0 {
        range 192.168.17.10 192.168.17.250;
        option domain-name-servers 130.232.213.x;
        # option domain-name-servers 192.168.16.200;
        option routers 192.168.17.254;
        next-server 192.168.16.200;
        option tftp-server-name "192.168.16.200";
        option bootfile-name "boot\\x86\\wdsnbp.com";
}

 

and restarted the dhcp-server, /etc/init.d/dhcp3-server restart.
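A tip for next time: dhcpd can syntax-check a configuration file before you restart the service, which makes errors like the self-test failures below much easier to chase down:

dhcpd -t -cf /etc/dhcp3/dhcpd.conf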

 

(Source: http://tspycher.com/2011/03/booting-into-wds-windows-deployment-service-from-linux-dhcpd/)

 

Now the test client booted nicely. Here’s a screenshot:

 

clip_image018

Fig 9. PXE-booting from wds.

 

All was not well, though. My Deployment Share wasn’t accessible due to dns errors. I got “A connection to the deployment share (\\WDS\DeploymentShare) could not be made”.

I pressed F8 to get into console mode and to do some error checking. I could ping my wds server via IP-address so the problem was dns. A quick configuration check on the linux dhcp server revealed the problem, my dhcpd.conf had the dns option:

domain-name-servers 130.232.x.x; (external).

I changed this to our own internal dns server (192.168.16.200). This dns server was also configured with forwarders to our external network (130.232.x.x) so name resolution works for both internal and external hosts. A good idea in theory, not in practice. Here’s a screenshot of DNS on the wds server.

 

clip_image020

Fig 10. Windows DNS Manager on wds-server

 

WindowsPE still can’t access \\wds via its short name. Somehow I get the external dns-suffixes even though I have configured the hosts to use the internal dns server (and suffixes) in dhcpd.conf.

Also, option domain-search "intra.abo.fi", "xxx.fi", "xxx.fi"; in dhcpd.conf gives me errors and I have no idea why 😦

 

root@iloinen:/etc# /etc/init.d/dhcp3-server restart

dhcpd self-test failed. Please fix the config file.

The error was:

WARNING: Host declarations are global.  They are not limited to the scope you declared them in.

 

Well I tried declaring them globally also… still no luck.

 

/etc/dhcp3/dhcpd-iloinen.conf line 167: unknown option dhcp.domain-search

option domain-search "intra.abo.fi"

Configuration file errors encountered -- exiting
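In hindsight, “unknown option dhcp.domain-search” suggests the dhcp3 package was simply too old; as far as I know, ISC dhcpd only learned the domain-search option by name in version 3.1. On a newer dhcpd the syntax should be (untested on this particular server):

option domain-search "intra.abo.fi", "xxx.fi";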

 

I finally gave up on dns names and used IP addresses instead. It’s not the prettiest solution, but at least it’s working. Clients are now contacting \\192.168.16.200\DeploymentShare instead of \\WDS\DeploymentShare. Success, finally 🙂

 

Note to self: If a computer exists in AD, it won’t join the domain during deployment. From logs:

NetSetup.LOG:

12/14/2012 09:34:46:923 NetpModifyComputerObjectInDs: Computer Object already exists in OU:

 

There is probably an easy workaround for this, but for me the easiest way was to remove the computer from AD before deployment.

 

My image is now finally deployed to a (physical) test computer. Success! 🙂 Further enhancements/tweaks can of course be made, and I’m writing about a few of them now. Total time for deployment (12 GB compressed image) was about 30 minutes over 1 Gbit LAN.

 

 

Adding Applications

 

One thing you probably want to do is add different applications to your image after/during deployment. It’s quite easy (at least for basic applications); the main things you need are the switches for silent install and so on. I tried adding Adobe Acrobat Reader 11 to my deployment, and it installed just fine. I followed a guide from:

 

http://www.itninja.com/question/hi-i-have-to-install-acrobat-reader-x1-silently-i-am-not-given-adberdr11000-en-us-exe-file-instead-i-am-given-setup-msi-ini-files-i-want-to-run-a-customization-wizard-to-create-an-mst-file-i-don-t-know-how-to-do-it-please-help

 

and as the forum post says, "AdbeRdr11000_en_US.exe /sPB /rs" also worked for me. I guess the installation of different programs is about the same, so I won’t try any others at the moment. Time will tell what I need.

 

 

Adding Drivers

 

One more thing you probably want to customize is drivers. You can add/inject out-of-box drivers from different vendors. This is very useful, as you can have different setups for workstations and laptops and so on. Update: I suggest that you have a look at selection profiles (or similar) before you mess around with other driver options:

 

http://www.deployvista.com/Default.aspx?tabid=78&EntryID=132

 

Our regular workstations (Osborne Core 2’s, a bit on the older side) work fine without (almost) any additional drivers, but I’ll add the missing ones with a trick learned from a video.

Video: http://channel9.msdn.com/Events/TechDays/Tekniset-Esitystallenteet/TechNet-2011-Windows-7-k-ytt-notto-osa-2

 

Laptops (Lenovo)

 

Our Department uses Lenovo Thinkpad laptops, which use various drivers. I will test injecting a couple of these. Lenovo has made some (excellent) administrator tools which will help you with the drivers. Instead of downloading and injecting drivers one by one, you can use programs that do all of this automatically. Well, semi-automatically anyway. They’re called ThinkVantage Update Retriever and ThinInstaller. Google “thinkvantage update retriever mdt” and you will find a Word document with instructions.

 

Here are my steps:

 

  • Downloaded Lenovo Update Retriever 5.00 and installed it on the wds/mdt server
  • Downloaded Lenovo ThinInstaller 1.2 and installed it on the wds/mdt server
  • Did not completely follow the setup instructions in the document:
    • It was suggested to add drivers to the Out-of-Box Drivers section. If you do this, the drivers are added to the boot image, which makes it grow to a huge size. I only need LAN (and possibly HDD) drivers for the boot image. In my case, I didn’t need either, because WinPE found my HDD and LAN card without additional Out-of-Box Drivers.
  • Skipped to the Working with ThinInstaller step of the guide
  • Followed the guide, and added a step (after the restart step in the Postinstall section) in my task sequence for copying the ThinInstaller files from the server to c:\thin on the clients.
  • The next step is to create a command after the previous one that actually runs ThinInstaller and installs all the necessary software and drivers on the client. The command used here is:
    C:\Thin\ThinInstaller.exe /CM -search A -action INSTALL -noicon -includerebootpackages 1,3,4 -noreboot
  • Ran a test-deployment on our Department’s Lenovo T500
  • Various results; it didn’t work that great, actually. Too many details to go through here.
  • Ended up with plan B, which was installing Lenovo’s System Update via MDT’s “Applications”. Again, not the prettiest solution, but at least you have the option of installing this software, and it doesn’t take that long to install missing drivers/software afterwards.

Our main installation scenario is workstations anyway, so I’ll put my energy into other areas of the deployment process.

 

 

Workstations (Osborne)

 

Nothing special about this, same procedure as with laptops except different Task Sequence without the Lenovo-stuff.

 

  • Installed our production image
  • Installed missing drivers via Windows Update after deployment completed
  • Copied the drivers that were installed via Windows Update (using the trick from the video described earlier)
    • From: the client’s C:\Windows\system32\DriverStore\FileRepository\
    • To: the mdt-server
    • Drivers with a newer date than 28.11.2012 (dates after my image making/sysprepping)
  • Injected the drivers into MDT
  • The drivers will be used in the next deployment. Tadaa 🙂
  • Update: now using selection profiles instead

 

Now that all of the “new computer” installations are working the way I want, I decided to go ahead and try refresh and replace installations. This is handy if you get a new computer and want to save the data from your old computer for example.

 

 

Refresh installation

 

I decided to try a refresh installation so I would know what it does. I didn’t do this on physical hardware, just in my lab environment.

 

“Basically you need to launch the deployment wizard from the OS you’re about to replace.

There are a variety of ways to do this but I usually browse to my deployment point on the network and run the BDD_Autorun.wsf within the scripts folder (an example is \\<server>\distribution$\Scripts\BDD_Autorun.wsf).

It will give you the option to either Refresh or Upgrade this computer, choose refresh, finish the wizard stuff and you should be good to go.”

 

Source: http://social.technet.microsoft.com/forums/en-US/itprovistadeployment/thread/57629548-ad95-4da6-a85c-ec3d9fe0e33a/

 

I ran BDD_Autorun.wsf and sat back to watch the magic. The result was a “refreshed” computer, just the way I left it before the refresh including all my documents and all extra folders I had created.

 

 

Replace installation

 

I decided to try out the replace installation as well. This is more likely to come in handy when new computers arrive at the Department and we want to save all the data from the old one.

Here’s some information copy/pasted from Andrew Barnes’s scripting and deployment Blog.

 

An existing computer on the network is being replaced with a new computer. The user state migration data is transferred from the existing computer to share then back to the new computer. Within MDT this means running 2 task sequences, Replace Client Task Sequence then a task based on the Standard Client Task Sequence template. The Replace Task Sequence will only back up your data and wipe the disk in preparation for disposal/reuse.

 

  • Task Sequence deployment from within Operating System or Bare Metal
  • Task Sequence run on Source machine captures user state
  • New machine begins using PXE boot or boot image media
  • User state must be stored on a share or state migration point
  • User state and compatible applications re-applied on new machine

 

Source: http://scriptimus.wordpress.com/2011/06/28/mdt2010-deployment-scenarios/

 

clip_image022

Pic source: http://blogs.technet.com/b/chrad/archive/2012/07/26/learning-mdt-2012-s-user-driven-installation-udi.aspx

 

  • I created a new Standard Client Replace Task Sequence on the wds server.
  • I ran BDD_Autorun.wsf (from \\wds-server\DeploymentShare$) on the computer that would be replaced, which launches the Windows Deployment Wizard.
  • I chose my newly created Standard Client Replace Task Sequence from the list of Task Sequences.
  • This didn’t work and ended in errors. The solution was to make some modifications to CustomSettings.ini:

 

DeploymentType=REPLACE

UserDataLocation=AUTO

UDShare=\\10.0.0.1\MigData

UDDir=%ComputerName%

 

Using this modification, the User Data got stored in MigData on the wds-server.

Note: I could also have used the method described later, which is removing stuff from Customsettings.ini…

Source: http://social.technet.microsoft.com/Forums/en-US/mdt/thread/9b9d32c3-4805-4264-95a3-51e90b24bfb7

 

  • I now ran a Standard Client Task Sequence to do a new installation and to restore the user data from MigData. Result: the Standard Client Task Sequence did NOT restore the user data.

 

  • Had to do some more reading about the subject, starting with: http://deployment.xtremeconsulting.com/2009/11/20/understanding-usmt-with-mdt-2010/

“The Client Deployment Wizard will ask if you want to restore user state and where the user state is stored.  The Restore User State step in the task sequence would then use USMT to restore the user state to the computer being deployed”.

 

This was not true in my case, the Wizard didn’t ask me anything. Time to check why.

 

  • Even more reading in:

http://allcomputers.us/windows_7/designing-a-lite-touch-deployment-%28part-2%29—deploying-images-to-target-computers.aspx

  • The easiest solution for me was to remove all the automatic stuff I had added in CustomSettings.ini. I changed (commented out) the following so I could manually answer the questions:

;SkipBDDWelcome=YES

;SkipDeploymentType=YES

;DeploymentType=NEWCOMPUTER

;UserDataLocation=AUTO

;UDShare=\\10.0.0.1\MigData

;UDDir=%ComputerName%

;SkipUserData=Yes

  • I ran the replace task sequence from the source computer again. I now had the option to tell mdt where to save the backup and whether I wanted to restore the user data into the new installation. I saved the files to the wds-server.
  • Created a new virtual machine and deployed Windows via a Standard Client Task Sequence. Manually answered the questions in the wizard. I now had the option to restore the user data.
  • Success 🙂
  • (I later noticed that SkipUserData & SkipDeploymentType were the correct options to solve my little mystery. I don’t mind answering a couple of questions, and I don’t need UDShare, UDDir etc. automatically defined.)

Source: http://allcomputers.us/windows_7/designing-a-lite-touch-deployment-%28part-3%29—customizing-target-deployments.aspx

 

There’s also an UPGRADE installation/deployment option, but I won’t test it because we have no need for it. You can’t upgrade from WinXP to Win7/8, so in our case it’s of no use.

 

 

 

 

Windows 8 Deployment

 

I tried deploying a plain and a production image of Windows 8 also. It’s just about the same procedure as with Windows 7, but you have to uninstall WAIK and install the new Windows Assessment and Deployment Kit (Windows ADK) for Windows 8 for proper deployment.

Also, update your deployment share and copy over the new boot image to the wds server (ADK uses a new version of Windows PE).

Other than that, everything seems to be working including Task Sequences and so on.

 

Note:

 

Tried a (successful) Win 8 deployment (4.3.2013); here are a couple of other problems I ran into:

 

http://support.microsoft.com/kb/977512

http://msitpros.com/?p=1290

 

With these problems fixed, everything seemed to be working just fine. (I actually uninstalled DNS completely as I didn’t need it.)

 

 

Note 2:

 

I’ve now (5.3.2013) moved over to better driver management with selection profiles.

Good article about this:

http://www.deployvista.com/Default.aspx?tabid=78&EntryID=132

 

 

Note 3:

 

Learn how to deploy with UEFI in my post Converting a windows 8 BIOS Installation to UEFI 

 

 

 

That’s it for this document. It’s been fun and I’ve learned a lot 🙂

 

 

 

Sources:

 

Mentioned in the text.