
Using rsync to backup to a remote Synology Diskstation

An updated version of the script can be found here.

I recently bought a NAS, a Synology DiskStation DS211j and stuffed two 1TB disks in it. I configured the disks to be in RAID 1 (mirrored) in case one of them decides to die. I then brought the NAS to a family member’s house and installed it there. Now she uses it to back up her important files (and as a storage tank for music and videos).

The good thing for me is that I can now make off-site backups of my home directories. I configured the DS211j to accept SSH connections so that I can log into it (as user admin or root). I used the web interface to create a directory for my backups (which appeared to be /volume1/BackupLennart after logging in with SSH).

After making a hole in her firewall that allowed me to connect to the DS211j, I created a backup script in /etc/cron.daily with the following contents:

#!/bin/bash
#
# This script makes a backup of my home dirs to a Synology DiskStation at
# another location. I use LVM for my /home, so I make a snapshot first and
# backup from there.
#
# Time-stamp: <2011-02-06 21:30:14 (lennart)>
 
###############################
# Some settings
###############################
 
# LVM options
VG=raidvg01
LV=home
MNTDIR=/mnt/home_rsync_snapshot/
 
# rsync options
DEST=root@remote-machine.example.com:/volume1/BackupLennart/
SRC=${MNTDIR}/*
OPTIONS="-e ssh --delete --progress -azvhHS --numeric-ids --delete-excluded "
EXCLUSIONS="--exclude lost+found --exclude .thumbnails --exclude .gvfs --exclude .cache --exclude Cache"
 
 
 
###############################
# The real work
###############################
 
# Create the LVM snapshot
if [ -d "$MNTDIR" ]; then
    # If the snapshot directory exists, another backup process may be
    # running
    echo "$MNTDIR already exists! Another backup still running?"
    exit 1
else
    # Let's make a snapshot and mount it
    mkdir -p "$MNTDIR"
    lvcreate -L5G -s -n snap$LV /dev/$VG/$LV
    mount /dev/$VG/snap$LV "$MNTDIR"
fi
 
 
# Do the actual backup
rsync $OPTIONS $EXCLUSIONS $SRC $DEST
 
# Remove the LVM snapshot
if [ -d "$MNTDIR" ]; then
    umount /dev/$VG/snap$LV
    lvremove -f /dev/$VG/snap$LV
    rmdir "$MNTDIR"
else
    echo "$MNTDIR does not exist!"
    exit 1
fi

Let’s walk through it: in the first section I configure several variables. Since I use LVM on my server, I can use it to make a snapshot of my /home partition. The LVM volume group I use is called ‘raidvg01’. Within that VG my /home partition resides in a logical volume called ‘home’. The variable MNTDIR is the place where I mount the LVM snapshot of ‘home’.

The rsync options are quite straightforward. Check the rsync man page to find out what they mean. Note that I used the --numeric-ids option because the DS211j doesn’t have the same users as my server and this way all ownerships will still be correct if I ever need to restore from this backup.

In the section called “The real work” I first create the MNTDIR directory. Subsequently I create the LVM snapshot and mount it. After this the rsync backup can be run and finally I unmount the snapshot and remove it, followed by the removal of the MNTDIR.

Since the script is placed in /etc/cron.daily it is executed every day. Because the backup connects to the remote DS211j over SSH, I set up SSH key access without a password. This Debian howto will tell you how to set that up.
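In essence the key setup boils down to something like the following (a sketch; on the DiskStation ssh-copy-id may not be available, in which case you append the public key to /root/.ssh/authorized_keys by hand):

# Generate a key pair without a passphrase (accept the defaults)
ssh-keygen -t rsa

# Copy the public key to the NAS so the backup can log in unattended
ssh-copy-id root@remote-machine.example.com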

The only thing missing in this setup is that the backups are not stored in an encrypted form on the remote NAS, but for now this is good enough. I can’t wait until the network bandwidth on both sides of this backup connection gets so fast (and affordable) that I can easily sync my music as well. Right now uploads are so slow that I hardly dare to include those files. I know that I shouldn’t complain, since the Netherlands has one of the highest broadband penetrations in the world, but, hey, don’t you just always want a little more, just like Oliver Twist?


Enable incremental-search-forward in Bash

I just read Ruslan Spivak’s blog posting on how to get Ctrl-s (which is bound to incremental-search-forward in Emacs) working to search incrementally through the command history in Bash.

Normally this behaviour is shadowed by a terminal flow-control key binding. To turn that off and ‘reveal’ the search-forward function, simply type

stty -ixon

(of course, adding this line to your ~/.bashrc file makes it permanent).
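In ~/.bashrc it could look like this (the interactive-shell guard is my own addition, because stty complains when no terminal is attached, e.g. during scp sessions):

# Disable flow control (Ctrl-s/Ctrl-q) so Ctrl-s reaches readline's
# forward search, but only in interactive shells
if [[ $- == *i* ]]; then
    stty -ixon
fi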

Great to get this working! Thanks Ruslan.


Recompiling the quota package in CentOS so that it can use LDAP to find email addresses

Today I compiled my first RPM package from source :-)! But let’s start at the beginning…

At work I recently implemented disk quota on our server. While trying to set up /etc/warnquota.conf I noticed the example lines at the bottom that showed how to configure warnquota to look up e-mail addresses in an LDAP directory. This was exactly what I needed, because we store our users’ e-mail addresses in our LDAP tree. Without this feature warnquota would try to send its warning mails to user@our-server.example.com instead of the address stored in LDAP (guests that only visit us for a few weeks, for example, use addresses at other institutes). The lines in /etc/warnquota.conf were:

LDAP_MAIL = true
LDAP_HOST = ldap.example.com
LDAP_PORT = 389
LDAP_BASEDN = ou=Users,dc=example,dc=com
LDAP_SEARCH_ATTRIBUTE = uid
LDAP_MAIL_ATTRIBUTE = mail
LDAP_DEFAULT_MAIL_DOMAIN = example.com
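The LDAP side of these settings can be checked independently of warnquota with a quick ldapsearch query (a sketch; the uid ‘jdoe’ is just an example):

ldapsearch -x -H ldap://ldap.example.com:389 \
    -b "ou=Users,dc=example,dc=com" "(uid=jdoe)" mail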

So, after saving the file I tested it by running warnquota -s (as root; I also made sure to reduce my own quota so that I would be the one getting an e-mail warning).

Unfortunately warnquota spat out some errors:

warnquota: Error in config file (line 65), ignoring
warnquota: Error in config file (line 66), ignoring
warnquota: Error in config file (line 67), ignoring
warnquota: Error in config file (line 68), ignoring
warnquota: Error in config file (line 69), ignoring
warnquota: Error in config file (line 70), ignoring
warnquota: Warning: Mailer exitted abnormally.

These were the line numbers with the LDAP options above :-(. Google pointed me to an old bug in Fedora that was marked as resolved. I also found out that the quota tools should be compiled with LDAP support for this to work. To be sure that it was actually possible I configured warnquota on my home server that runs Ubuntu 10.04 and also uses LDAP. There, it all worked as expected.

So, my next step was clear: make my own RPM package for quota. The one installed by CentOS 5.4 is quota-3.13-1.2.5.el5. These are the steps I took:

  • Enable the CentOS source repository by creating the file /etc/yum.repos.d/CentOS-Source.repo with these contents:
    [centos-src]
    name=CentOS $releasever - $basearch - Source
    baseurl=http://mirror.centos.org/centos/$releasever/os/SRPMS/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

    Then run yum update and check that the new repository is listed.

  • Install the yum-utils and rpmdevtools packages: sudo yum install yum-utils rpmdevtools.
  • Set up a directory to do your build work in. I created the directory ~/tmp/pkgtest.
  • Run rpmdev-setuptree to create the required sub directories.
  • Set the basic build configuration by creating the file ~/.rpmmacros with the following contents:
    # Path to top of build area
    %_topdir    /home/lennart/tmp/pkgtest
  • Go into the SRPMS directory and download the source package: yumdownloader --source quota
  • In the top level directory run rpm -i SRPMS/quota-3.13-1.2.5.el5.src.rpm to unpack the package.
  • The SPECS directory now contains the .spec file with the build instructions. The SOURCES directory contains the source files and patches from Red Hat. In a temporary directory I untarred the quota tools source tar.gz file and ran ./configure --help to find out which option I should add to the spec file in order to enable LDAP lookups. The option was: --enable-ldapmail=yes. The set of configure lines in the spec file now looked like this:
    %build
    %configure \
    	--with-ext2direct=no --enable-rootsbin --enable-ldapmail=yes
    make

    In the spec file I also added a changelog entry:

    * Mon Oct 18 2010 Lennart Karssen <lennart@karssen.org> 1:3.13-1.2.6
    - Added --enable-ldapmail=yes to the ./configure line to enable LDAP
      for looking up mail addresses. (Resolves Red Hat Bugzilla 133207,
      it is marked as resolved there, but apparently was reintroduced.)

    And I also bumped the build version number at the top of the file (the Release: line). Finally, I added openldap-devel to the BuildPreReq line (of course I ran into a compilation error first and then installed the openldap-devel package :-)).

  • Now it’s time to build the package. In the SPECS directory run: rpmbuild -bb quota.spec and wait. The RPM package is created in the RPMS directory.
  • Install the package: sudo rpm -Uvh RPMS/x86_64/quota-3.13-1.2.6.x86_64.rpm (if you didn’t bump the package version number, the --replacepkgs option must be added to ‘upgrade’ to the same version).

And that was it! The package installed cleanly and a test run of warnquota -s was successful.


Nagios event handlers for services on remote machines

Part of my work consists of managing the servers on which we do our data analysis. At the moment we’ve got two servers and one virtual machine running. The VM is used as a management server, it runs things like Nagios, Cacti, Subversion, etc.

Today I implemented Nagios event handlers in this setup. The idea behind an event handler is the following: If e.g. a service goes down, Nagios should try to solve this problem itself before notifying the administrator (me). It should, in this case, simply try to restart the service.

The Nagios documentation [1] describes how to do this for a service that runs on the same machine as the Nagios service. In my case, however, the services are running on the two real servers. To me it seemed logical to use NRPE to execute the necessary commands on the remote hosts (since NRPE was already running on those machines anyway).
In order to adapt the scheme from the Nagios docs to work on remote servers as well three things need to be done:

  • The command that is executed by the event handler script should be changed to use NRPE
  • On the remote machine the nagios user (under which the NRPE service is running) should be given some sudo rights so that it is actually allowed to start a service.
  • The NRPE configuration on the remote machine should of course be changed to include the new command(s) for starting services.

So here we go! First, the Nagios configuration on the management host. In the service definition file I added one line for the event handler to each service. The definition of one service now looks like this (the last line was added):

define service {
       use                      generic-service
       hostgroup_name           sge-exec-servers
       service_description      SGE execd
       check_command            check_nrpe_1arg!check_sge_execd
       notification_interval    0 ; set > 0 if you want to be renotified
       event_handler            restart-service!sge-execd
}

Next, the restart-service command must be defined. I did that in a file that I called /etc/nagios3/conf.d/event-handlers.cfg:

define command {
       command_name     restart-service
       command_line     /etc/nagios3/conf.d/event_handler_script.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$ $HOSTADDRESS$ $ARG1$ $SERVICEDESC$
}

The variable $ARG1$ here is the name of the service that needs to be restarted. In this example it is sge-execd, taken from the event_handler line in the service definition. The $HOSTADDRESS$ macro is used in the event handler script to pass the right host name to NRPE.
The event_handler_script.sh referenced here is almost identical to the one in the Nagios documentation. As mentioned in the plan above, I changed it slightly so that it uses NRPE.

#!/bin/sh
#
# Event handler script for restarting a service on a remote machine via NRPE.
# Based on the Nagios documentation and
# http://www.techadre.com/sites/techadre.com/files/event_handler_script_0.txt
# Adapted by L.C. Karssen
# Time-stamp: <2010-09-14 15:24:33 (root)>
#
# Note: This script will only restart the service if the check is
#       retried 3 times (in a "soft" state) or if the service somehow
#       manages to fall into a "hard" error state.
#
 
date=`date`
 
# What state is the NRPE service in?
case "$1" in
OK)
        # The service just came back up, so don't do anything...
        ;;
WARNING)
        # We don't really care about warning states, since the service is probably still running...
        ;;
UNKNOWN)
        # We don't know what might be causing an unknown error, so don't do anything...
        ;;
CRITICAL)
        # Aha!  The BLAH service appears to have a problem - perhaps we should restart the server...
 
        # Is this a "soft" or a "hard" state?
        case "$2" in
 
        # We're in a "soft" state, meaning that Nagios is in the middle of retrying the
        # check before it turns into a "hard" state and contacts get notified...
        SOFT)
                # What check attempt are we on?  We don't want to restart the web server on the first
                # check, because it may just be a fluke!
                case "$3" in
 
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the state
                # type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check will
                # result in a "soft" recovery.  If that happens no one gets notified because we
                # fixed the problem!
                3)
                        echo "Restarting service $6 (3rd soft critical state)..."
                        # Call NRPE to restart the service on the remote machine
                        /usr/lib/nagios/plugins/check_nrpe -H $4 -c restart-$5
                        echo "$date - restart $6 - SOFT"  >> /tmp/eventhandlers
                        ;;
                esac
                ;;
 
        # The service somehow managed to turn into a hard error without getting fixed.
        # It should have been restarted by the code above, but for some reason it didn't.
        # Let's give it one last try, shall we?
        # Note: Contacts have already been notified of a problem with the service at this
        # point (unless you disabled notifications for this service)
        HARD)
                case "$3" in
 
                4)
                        echo "Restarting $6 service..."
                        # Call NRPE to restart the service on the remote machine
                        echo "$date - restart $6 - HARD"  >> /tmp/eventhandlers
                        /usr/lib/nagios/plugins/check_nrpe -H $4 -c restart-$5
                        ;;
                esac
                ;;
        esac
        ;;
esac
exit 0

Now Nagios can be restarted and should continue its work as usual. Time to make the changes on the remote hosts.

First, we’ll grant the necessary sudo rights to the nagios user. Run visudo and add these lines:

## Allow NRPE to restart services
User_Alias NAGIOS = nagios,nagcmd
Cmnd_Alias NAGIOSCOMMANDS = /usr/sbin/service
Defaults:NAGIOS !requiretty
NAGIOS    ALL=(ALL)    NOPASSWD: NAGIOSCOMMANDS

And finally add the required lines in the NRPE config file (/etc/nagios/nrpe.cfg):

command[restart-sge-execd]=/usr/bin/sudo /usr/sbin/service gridengine-exec start

Restart the NRPE daemon and it should all work. Test it by manually stopping the service.
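You can also fire the restart command by hand from the Nagios host to verify that the NRPE and sudo parts work before letting Nagios do it (a sketch; replace the host name with one of your own execution hosts):

# Should run 'sudo service gridengine-exec start' on the remote machine
/usr/lib/nagios/plugins/check_nrpe -H sge-exec1.example.com -c restart-sge-execd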

[1] Nagios documentation on Event Handlers
[2] Two blog posts that describe a similar setup. I used these as a starting point for my own.


Lenovo Thinkpad X100e and Ubuntu 10.04

About a month ago I bought a Lenovo Thinkpad X100e laptop. Well, maybe laptop is a bit too big a word for it. Size-wise it’s more like a netbook with its screen diagonal of 11.6″. Performance-wise however, it’s much better. The one I’ve got has an AMD Turion Neo X2 L625 dual core processor running at a maximum of 1.6GHz and 2GB of RAM. It’s a nifty little machine that serves my needs: doing some work on the train to and from work, or while being on conferences.

I took quite some time to look around for a laptop like this, and this Thinkpad seems to be the only one that satisfies my minimum requirements:
– Matte screen; no glossy screens for me, I’ve already got a mirror in my bathroom :-).
– Trackpoint; yep, that’s the red dot in between the G, H, and B keys.
– A processor that was more powerful than Intel’s Atom
– A decent keyboard, because for me, using Linux means using the command line and Emacs a lot.

After several weeks of use I’ve found only one drawback to this machine: its processor is not that efficient. It uses quite some power and therefore gets a bit hot. As a result the fan runs a lot (even though it’s not that audible) and battery life is not too good. I’m getting approximately 2 to 3 hours out of it if I reduce the screen brightness and turn wifi off. That could have been better (maybe Lenovo should have used an Intel CULV processor?), but it’s not too much of a limitation. And it came as no surprise: most reviews on the web mention it.

After opening the box I quickly made an image of the Windows partitions that were on it and then proceeded to install Ubuntu 10.04 on it. Most of the hardware was recognised by the 2.6.32 kernel included with Ubuntu’s 10.04 release. However, as several blogs (see links below) pointed out there are a few bumps, e.g. with suspend and resume, or the wireless chip that is able to connect, but doesn’t want to send or receive data. The bumps were smoothed out by installing a newer kernel (2.6.35-12-generic) from the Ubuntu kernel PPA. The 2.6.35 kernel is the one that will be used in the next Ubuntu release and the PPA contains packages that make this kernel run in the present release as well. With that kernel, suspend and hibernate run well, as well as most Fn function keys. In fact, the only one that doesn’t seem to work is Fn+F3 for microphone mute. I had to turn on the bluetooth module in Windows before it showed up in Ubuntu (as noted by several blogs). At the moment, the things that don’t work correctly are:
– The microphone doesn’t record (neither in the sound recorder, nor when using Skype). Sometimes it shows some activity if the mic-volume slider is moved to about 25%, but I couldn’t get that to work reliably.
– The combined mic/headphone jack doesn’t mute the speakers if a pair of headphones is plugged in (neither is any sound heard through the headphones).
Maybe a newer ALSA release in the upcoming Ubuntu 10.10 will remedy these problems.

I was pleasantly surprised by the fact that using the open source radeon driver (installed by default) for the AMD/ATI graphics card worked out of the box, including Compiz 3D desktop fancy stuff. The VGA out also worked perfectly when I hooked it up to my Sony Bravia TV. Xorg’s RandR detected it and I could choose between an extended desktop or a clone setup.

As I already mentioned, I’m a trackpoint user, so I wanted to disable the touchpad, especially since the two buttons for it are located at the front edge of the laptop and are easily pressed when the device sits on your lap and you’ve got your knees pulled up.
Secondly, I enabled wheel emulation for the trackpoint. Now, if I click and hold the middle ‘mouse’ button and push the trackpoint in a certain direction, it acts as a scroll wheel. To achieve this I created the file /usr/lib/X11/xorg.conf.d/20-thinkpad.conf (EDIT: for Ubuntu 10.10 this file should be located in /usr/share/X11/xorg.conf.d/) with the following contents:

Section "InputClass"
	Identifier "Trackpoint Wheel Emulation"
	MatchProduct "Trackpoint"
	MatchDevicePath "/dev/input/event*"
	Driver "evdev"
	Option "EmulateWheel" "true"
	Option "EmulateWheelButton" "2"
	Option "Emulate3Buttons" "3"
	Option "XAxisMapping" "6 7"
	Option "YAxisMapping" "4 5"
EndSection	

All in all I’m very happy with the X100e. It’s a small but sturdy laptop with an excellent screen and a great keyboard.

Some links:
An excellent review of the Lenovo Thinkpad X100e
A recent review at AnandTech
Ubuntu kernel PPA
ThinkWiki page for the X100e, has lots of info on running Linux on this laptop.
A blog about installing Ubuntu Linux on the X100e, the problems mentioned in that post and its comments have now been solved (if you install the 2.6.35 kernel from the PPA). I tried the gpointing-device-settings package for some time (to get the trackpoint scroll functionality to work), but its settings didn’t survive across reboots or even after hibernating, so I removed it again.


Linux, the Logitech Trackman Marble and emulating a scroll wheel

At work I recently came across a trackball. It was about to be thrown away and since I’d never really used one I decided to take it home and try it out. It’s a Logitech Trackman Marble, still for sale on Logitech’s website.

The trackball features four buttons: two large ones for left- and right-clicking and two smaller ones that work as back and forward buttons in Firefox, for example.

After plugging it into my PC it was instantly recognised by X (I’m using Ubuntu 10.04). There’s no middle mouse button, but that can be emulated by clicking the left and right mouse buttons at the same time (something I’ve been used to on older laptops and, well, even from the time when some of the mice I owned only had two buttons). However, I did miss my scroll wheel. A quick search on the Internet brought me to Rob Meerman’s website, where he explains a lot about the Trackman and how it works in X. He even has a special section on Ubuntu 10.04. In short it comes down to these commands:

xinput set-int-prop "Logitech USB Trackball" "Evdev Wheel Emulation Button" 8 8
xinput set-int-prop "Logitech USB Trackball" "Evdev Wheel Emulation" 8 1

Unfortunately the changes made by these commands are not persistent across reboots. I’ll try to fix that later.
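One probable fix (untested on my side, but analogous to the trackpoint configuration I use on my Thinkpad X100e) would be an InputClass section in xorg.conf.d; this assumes the device keeps identifying itself as “Logitech USB Trackball”:

Section "InputClass"
	Identifier "Marble scroll emulation"
	MatchProduct "Logitech USB Trackball"
	Driver "evdev"
	Option "EmulateWheel" "true"
	Option "EmulateWheelButton" "8"
	Option "XAxisMapping" "6 7"
	Option "YAxisMapping" "4 5"
EndSection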

EDIT: To add middle mouse button emulation and horizontal scrolling (thanks to rejistania below) run:

xinput set-int-prop "Logitech USB Trackball" "Evdev Middle Button Emulation" 8 1
xinput set-prop "Logitech USB Trackball" "Evdev Wheel Emulation Axes" 6 7 4 5

END EDIT

Regarding the use of a trackball compared to an ordinary mouse, my experiences so far have been very positive. It didn’t take me a lot of time to get used to it. Also, precision placement of the pointer doesn’t seem to be more difficult than with a regular mouse. So for now my wireless Logitech mouse can take a holiday :-). The nicest thing about the trackball is the fact that you don’t have to move the whole device, so it’s less ‘weight lifting’. Also, the fact that the ball (in combination with the small button) is the scroll wheel makes for a relatively heavy wheel without much friction, so scrolling large distances can simply be done by giving the ball a good spin. Nice!


Script that converts a Squirrelmail address book to vcf format

Today I installed Roundcube as a replacement for my Squirrelmail webmail setup. All went well, but Roundcube only accepts addresses in vCard format (.vcf format), whereas Squirrelmail only exports in CSV format. To solve this I wrote the following script that converts a Squirrelmail address book to vCard format. The results are sent to stdout, so run it like this: abook2vcf.awk user@example.com.abook > my_addresses.vcf.
On my Debian install the Squirrelmail address books (files ending in .abook) are located in /var/lib/squirrelmail/data.

#!/usr/bin/awk -f
#
# This script converts a Squirrelmail address book to 
# vcards for import in Roundcube for example.
BEGIN{
    FS="|"
}
 
{
    full_name  = $1;
    first_name = $2;
    last_name  = $3;
    email      = $4;
 
    print("BEGIN:VCARD");
    print("VERSION:3.0");
    printf("FN:%s\n", full_name);
    printf("N:%s;%s;;;\n", last_name, first_name);
    printf("EMAIL;type=INTERNET;type=HOME;type=pref:%s\n", email);
    print("END:VCARD");
}
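For reference, a Squirrelmail .abook line is pipe-separated: the name used in the FN field, first name, last name, e-mail address and an extra info field (which the script ignores). So an input line like this (the entry itself is just an example):

jdoe|John|Doe|john.doe@example.com|colleague

is converted into:

BEGIN:VCARD
VERSION:3.0
FN:jdoe
N:Doe;John;;;
EMAIL;type=INTERNET;type=HOME;type=pref:john.doe@example.com
END:VCARD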


Using Windows AD for Apache authentication

Recently I was setting up a Subversion repository (on a Linux server) that needs to be accessed via HTTP. Users should be able to connect to the repositories without authentication, but authentication is needed to perform write actions. Of course Apache’s htpasswd/htaccess combination would provide just that, but since we have a Windows 2008 Active Directory domain controller that provides authentication to our Windows machines, I thought it would be a good idea to use it.

Configuration of the authentication and authorization is done by Apache’s mod_authnz_ldap and (on Red Hat EL) configured in /etc/httpd/conf.d/subversion.conf (which exists after installing the subversion package with yum).

Basic configuration with htaccess
For simple authentication with Apache’s htaccess mechanism the config looks like this:

LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so
 
<Location /repos>
   DAV svn
   SVNParentPath /var/www/svn
   SVNReposName "My company's repository"
 
   # Limit write permission to list of valid users.
   <LimitExcept GET PROPFIND OPTIONS REPORT>
      AuthType Basic
      AuthName "Authorization Realm for SVN"
      AuthUserFile /etc/httpd/conf.d/svn_htpasswd
      Require valid-user
 
   </LimitExcept>
</Location>

After using htpasswd to create a file with usernames and passwords on the server, users could commit to the repository.
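Creating that password file is a one-liner (the user name is an example; note that -c creates the file and should only be used for the first user):

# Create the password file and add the first user (prompts for a password)
htpasswd -c /etc/httpd/conf.d/svn_htpasswd jdoe

# Add further users without -c, which would overwrite the file
htpasswd /etc/httpd/conf.d/svn_htpasswd anotheruser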

Configuration for AD Global Catalog
The first LDAP-like construction I got working was using the AD Global Catalog. Normal LDAP traffic uses port 389, but the AD’s Global Catalog uses port 3268. The username needed to commit with SVN is windows_logon_name@your_AD.suffix, the so-called userPrincipalName. Here, your_AD and suffix are the DCs of the LDAP/AD tree. By using this userPrincipalName users from different DC trees can be authenticated. The configuration file looks like this:

LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so
 
<Location /repos>
    DAV svn
    SVNParentPath /var/www/svn
    SVNReposName "My company's repository"
 
    # Limit write permission to list of valid users.
    <LimitExcept GET PROPFIND OPTIONS REPORT>
      AuthType Basic
      AuthName "Authorization using your LDAP account"
      AuthBasicProvider ldap
      AuthzLDAPAuthoritative off
      # Active Directory requires an authenticating DN to access records
      AuthLDAPBindDN "svntest@your_AD.suffix"
 
      # This is the password for the AuthLDAPBindDN user in Active Directory
      AuthLDAPBindPassword "some_good_password"
 
      # The LDAP query URL
      AuthLDAPURL "ldap://ldap.your_company.com:3268/?userPrincipalName?sub"
      AuthUserFile /dev/null
 
      # Require a valid user
      Require valid-user
    </LimitExcept>
</Location>

With this configuration I could commit with this command: svn commit -m "First AD test" --username your_windows_username@your_AD.suffix.

Configuration for AD + Windows logon Name
As mentioned earlier, the previous method allows people from different parts of the AD tree to log in. In order to restrict access to, for example, a specific OU, the AuthLDAPURL has to be changed. In our case the LDAP tree is not a simple OU=Users,DC=our_company,DC=com, but consists of several nested OU structures. I used the adsiedit.msc snap-in (ADSI editor) on the AD controller to find out the exact structure, since I needed to know which parts were CNs and which were OUs.
In order to authenticate against the Windows logon names in a certain sub-OU the AuthLDAPURL is

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?sAMAccountName?sub?(objectClass=*)"

Configuration for AD + Windows Display Name
If you want the users to use their common name (the Display Name in the AD) use:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?cn"

Users can now commit with: svn commit -m "Another AD test" --username "Firstname Lastname".

Configuration for AD + another field
In our case login authentication on the Linux/UNIX machines is not done through the AD. Furthermore, the user names are not synchronised between Linux and Windows. This poses a small inconvenience, since by default an svn commit uses the Linux username. As the AD doesn’t know about this name, the first authentication attempt fails. Subsequently Apache asks for the user name, and then the user can enter his Windows AD credentials (principal name, display name or Windows logon name, depending on which of the above configurations was chosen). So as a quick workaround (and just to see if I could get it to work) I entered my Linux user name into the Office field in the AD. In the ADSI Editor I found the real name of this field: physicalDeliveryOfficeName. With the following AuthLDAPURL I could use the Office field to authenticate me:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?physicalDeliveryOfficeName"

Now a simple svn commit works.



Script to tunnel RDP connections through stepping stone server using SSH

In order to connect to the servers at work we need to connect through a stepping stone host (via SSH). Since some of the servers are MS Windows machines which can be managed via the Remote Desktop Protocol (RDP), this traffic needs to be tunneled over SSH as well.
I wrote the following bash script to automate setting up the tunnel. It sets some default variables and then looks for an available port between 1234 and 1254 (chosen completely arbitrarily) and uses it for the tunnel. Then it uses the rdesktop program to start the RDP connection.

#!/bin/bash
#
# This script makes an ssh tunnel to a stepping stone server
# and uses it to start an rdesktop connection to the machine 
# given as the first argument of the script. 
#
# (C) L.C. Karssen
# $Id: winremote.sh,v 1.14 2010/02/10 13:03:08 lennart Exp $
#
 
# User-configurable variables
ssh_username=your_steppingstone_username_here
steppingstone=steppingstone.your_company.com
rdesktop_username=your_windows_username_here
rdesktop_domain=your_windows_domain_here
rdesktop_opts="-z -g 1024x768 -a 16"
rdesktop_port=3389 # This is the standard MS rdesktop port
 
 
# For ordinary users it should not be necessary to change anything below this line. 
# Some functions:
usage()
{
    cat <<EOF
Usage:
    $program [options] rdesktop_hostname 
 
Make a remote desktop connection through an SSH tunnel.
 
Options: 
    -h, --help                                   print this help message
    -s, --steppingstone steppingstone_hostname   set other stepping stone host
                                                   (default: $steppingstone)
    -t, --timeout sec                            set timeout (default: 1)
    -v, --verbose                                verbose output
     --version                                   print version
 
Note that some customisations need to be made in the first few lines of this 
script (e.g. user names and other defaults)
EOF
}
 
program=`basename $0`
 
# Command line option parsing. Shift all options 
verbose=
timeout=1
 
while [ $# -gt 0 ]
do 
    case $1 in
	-v | --verbose | -d | --debug ) 
	    verbose=true
	    ;;
	--version )
	    echo '$Revision: 1.14 $'
	    exit 0
	    ;;
	-t | --timeout ) 
	    shift
	    timeout="$1"
	   if [ $timeout -lt 1 ]; then
	       timeout=1
	   fi
	   if [ $verbose ]; then
	       echo "Timeout set to $timeout"
	   fi
	   ;;
	-s | --steppingstone ) 
	   shift
	   steppingstone="$1"
	   if [ $verbose ]; then
	       echo "Steppingstone server is $steppingstone"
	   fi
	   ;;
	-h | --help ) 
	   usage
	   exit 0
	   ;;
	-*) 
	   echo "$0: invalid option $1" >&2
 	   usage
	   exit 1
	   ;;
	*) 
	   break
	   ;;
    esac
    shift
done
 
# Server name (as seen on the steppingstone) that we want to connect to:
rdesktop_server=$1 
 
################### Config done, let's get to work ########################
 
# Simple usage description
if [ "$rdesktop_server" == "" ]; then
    echo "Error: No rdesktop host given" >&2
    usage
    exit 1
fi
 
# Find a free port on the local machine that we can use to connect through
min_port=1234
max_port=1254
used_ports=(`netstat -tan | awk '{print $4}' | grep 127.0.0.1 | awk -F: '{print $2}' | sort -g`)
if [ $verbose ]; then
    echo "Used ports are: ${used_ports[@]}"
fi
 
# In the next line we first print the $used_ports as an array, but with 
# each port on a single line. This is then piped to an awk script that 
# puts all the values in an array and subsequently walks through all ports 
# from $min_port to $max_port in order to find the first port that is not 
# in the array. This port is printed.
local_port=`printf "%i\n" ${used_ports[@]} | \
    awk -v minp=$min_port -v maxp=$max_port \
    '{ array[$1]=1 } END { for (i=minp; i<=maxp; i++) { if (i in array) continue; else { print i; break } } }'`
if [ "$local_port" == "" ]; then
    echo "Error: No ports free! Exiting..." >&2
    exit 2
fi
if [ $verbose ]; then
    echo "Selected port was: $local_port"
fi
 
# Create tunnel:
if [ $verbose ]; then
    echo "Creating SSH tunnel..."
fi
ssh_opts="-f -N -L"
ssh $ssh_opts $local_port:$rdesktop_server:$rdesktop_port \
    $ssh_username@$steppingstone 
 
# Allow the ssh tunnel to be established
sleep $timeout
 
# Abort if tunnel has not been established
pidof_ssh=`pgrep -f "ssh $ssh_opts $local_port"`
if [ "$pidof_ssh" == "" ]; then
    echo "Error: Timeout while establishing tunnel" >&2
    echo "Exiting..."
    exit 3
fi
 
# Make rdesktop connection
if [ $verbose ]; then
    echo "Opening Remote desktop connection to $rdesktop_server..."
fi
rdesktop $rdesktop_opts -u $rdesktop_username -p - \
    -d $rdesktop_domain localhost:$local_port
 
# Clean up tunnel
if [ $verbose ]; then
    echo "Cleaning up SSH tunnel with pid $pidof_ssh and local port $local_port"
fi
kill $pidof_ssh
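Typical invocations would then look like this (host names are examples):

# Connect to the Windows server 'winserver01' via the default stepping stone
./winremote.sh winserver01

# Use another stepping stone and give the tunnel 3 seconds to come up
./winremote.sh -s gateway2.your_company.com -t 3 winserver01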


Cloning Ubuntu virtual machines: some problems (and solutions)

Yesterday I set up a KVM virtual machine on my new Ubuntu 9.10 server. The VM also ran Ubuntu 9.10 server. In order to do some performance tests (what would be the speed up of having the VM’s disks on an LVM LV on the host, compared to having them in a file on the host) I used virt-clone to clone the machine:

virt-clone --connect=qemu:///system -o testldap -n testldap-lvm -f testldap-lvm/ubuntu-kvm/disk0.img

This clones the VM named testldap to testldap-lvm and puts its disk file in the subdirectory testldap-lvm/ubuntu-kvm/. After that I still had to convert this image file to its location in an LV, but that’s not what this post is about.

As the machine is cloned, the MAC address of its virtual NIC is also changed. The ‘source’ VM had 52:54:00:f2:cc:40, the new VM was given 00:16:36:46:34:42. As I booted the new VM I noticed it wouldn’t come up as expected. I couldn’t reach it via the fixed IP that I had given the source VM (even though the source VM was shut down, of course). Closer inspection revealed that the interface name for the NIC in the new VM had changed. I vaguely remembered that Debian-derived distros do that: because they don’t want NIC name assignments (eth0, eth1, etc.) to change if a new network adapter is added, they tie a name to a MAC address. And, as noted, the MAC address had indeed changed in the cloning process.
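For illustration, such a MAC-to-name binding looks roughly like this (an example entry; the device comment and attribute values will differ per system):

# PCI device 0x1af4:0x1000 (virtio-pci)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="52:54:00:f2:cc:40", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"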

The assignments between MAC and eth? name are recorded in the file /etc/udev/rules.d/70-persistent-net.rules. They are set by the script /lib/udev/write_net_rules, so I removed the execute permissions on that file. However, this was not a clean solution, since it resulted in an error on start up. I found that editing /lib/udev/rules.d/75-persistent-net-generator.rules is a far better solution. Adding the lines

# ignore KVM virtual interfaces
ENV{MATCHADDR}=="52:54:00:*", GOTO="persistent_net_generator_end"
# This seems to be the range used by Xen, but also by virt-clone
ENV{MATCHADDR}=="00:16:36:*", GOTO="persistent_net_generator_end"

seems to do the trick (don’t forget to remove the rules already added in /etc/udev/rules.d/70-persistent-net.rules). Make sure to add them after the lines

# read MAC address
ENV{MATCHADDR}="$attr{address}"

so that the variable MATCHADDR has a value. I documented this solution in the Ubuntu bug report that seemed the most appropriate as well.

This solved one problem. Then the next problem reared its ugly head: both the source VM and the clone refused to finish their boot process; they kept hanging on the NFS mounts defined in /etc/fstab. The only options mountall gave were to enter the root password (after pressing ESC) or to type Ctrl-D to continue. Doing the latter resulted in nothing but an infinite wait. In an Ubuntu bug report I found that using DHCP for the network interface would solve the problem. And indeed it did. However, since I want static IP addresses for my servers, this was not a solution I liked. Much to my surprise the NFS mounts worked perfectly after changing the interface (in /etc/network/interfaces) back to static. I don’t know why, but on both VMs I set the configuration for eth0 from static to dhcp, rebooted, changed it back to static and rebooted again to find the problem solved… Strange!

Update 2009-12-18:
As it turns out, the solution to the mount problem doesn’t always work. I tried it again, but now it failed to work after switching back from DHCP to a static IP. I guess it has something to do with the lease time of the IP, because in the case described above there was a night between using the DHCP IP and turning static back on. So somewhere, something needs to time out before switching back from DHCP to static IPs works again.

