Notes about open source software, computers, other stuff.

Tag: sysadmin (Page 5 of 5)

Recompiling the quota package in CentOS so that it can use LDAP to find email addresses

Today I compiled my first RPM package from source :-)! But let’s start at the beginning…

At work I recently implemented disk quota on our server. While trying to set up /etc/warnquota.conf I noticed the example lines at the bottom that showed how to configure warnquota to look up e-mail addresses in an LDAP directory. This was exactly what I needed, because we store our users’ e-mail addresses in our LDAP tree. Without this feature warnquota would try to send its warning mails to user@our-server.example.com instead of the address stored in LDAP (guests that only visit us for a few weeks even use completely different addresses). The lines in /etc/warnquota.conf were:

LDAP_MAIL = true
LDAP_HOST = ldap.example.com
LDAP_PORT = 389
LDAP_BASEDN = ou=Users,dc=example,dc=com
LDAP_SEARCH_ATTRIBUTE = uid
LDAP_MAIL_ATTRIBUTE = mail
LDAP_DEFAULT_MAIL_DOMAIN = example.com

So, after saving the file I tested it by running warnquota -s (as root, and I also made sure I reduced my own quota so I would be the one getting an e-mail warning).

Unfortunately warnquota spat out some errors:

warnquota: Error in config file (line 65), ignoring
warnquota: Error in config file (line 66), ignoring
warnquota: Error in config file (line 67), ignoring
warnquota: Error in config file (line 68), ignoring
warnquota: Error in config file (line 69), ignoring
warnquota: Error in config file (line 70), ignoring
warnquota: Warning: Mailer exitted abnormally.

These were the line numbers with the LDAP options above :-(. Google pointed me to an old bug in Fedora that was marked as resolved. I also found out that the quota tools should be compiled with LDAP support for this to work. To be sure that it was actually possible I configured warnquota on my home server that runs Ubuntu 10.04 and also uses LDAP. There, it all worked as expected.

So, my next step was clear: make my own RPM package for quota. The one installed by CentOS 5.4 is quota-3.13-1.2.5.el5. These are the steps I took:

  • Enable the CentOS source repository by creating the file /etc/yum.repos.d/CentOS-Source.repo with the following content:
    [centos-src]
    name=CentOS $releasever - $basearch - Source
    baseurl=http://mirror.centos.org/centos/$releasever/os/SRPMS/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

    Then run yum update and check that the new repository is listed.

  • Install the yum-utils and rpmdevtools packages: sudo yum install yum-utils rpmdevtools.
  • Set up a directory to do your build work in. I created the directory ~/tmp/pkgtest.
  • Run rpmdev-setuptree to create the required sub directories.
  • Set the basic build configuration by creating the file ~/.rpmmacros with the following contents:
    # Path to top of build area
    %_topdir    /home/lennart/tmp/pkgtest
  • Go into the SRPMS directory and download the source package: yumdownloader --source quota
  • In the top level directory run rpm -i SRPMS/quota-3.13-1.2.5.el5.src.rpm to unpack the package.
  • The SPECS directory now contains the .spec file with the build instructions. The SOURCES directory contains the source files and patches from Red Hat. In a temporary directory I untarred the quota tools source tarball and ran ./configure --help to find out which option I should add to the spec file in order to enable LDAP lookup. The option was: --enable-ldapmail=yes. The set of configure lines in the spec file now looked like this:
    %build
    %configure \
    	--with-ext2direct=no --enable-rootsbin --enable-ldapmail=yes
    make

    In the spec file I also added a changelog entry:

    * Mon Oct 18 2010 Lennart Karssen <lennart@karssen.org> 1:3.13-1.2.6
    - Added --enable-ldapmail=yes to the ./configure line to enable LDAP
      for looking up mail addresses. (Resolves Red Hat Bugzilla 133207,
      it is marked as resolved there, but apparently was reintroduced.)

    And I also bumped the build version number at the top of the file (the Release: line). Finally, I added openldap-devel to the BuildPreReq line (of course I ran into a compilation error first and then installed the openldap-devel package :-)).

  • Now it’s time to build the package. In the SPECS directory run: rpmbuild -bb quota.spec and wait. The RPM package is created in the RPMS directory.
  • Install the package: sudo rpm -Uvh RPMS/x86_64/quota-3.13-1.2.6.x86_64.rpm (if you didn’t bump the package version number, the --replacepkgs option must be added in order to ‘upgrade’ to the same version).
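For reference, the spec-file edit from the steps above can also be applied non-interactively with sed. The sketch below works on a throw-away copy with invented contents, not the real quota.spec, so check the pattern against your actual file first:

```shell
# Create a stripped-down stand-in for quota.spec, then append
# --enable-ldapmail=yes to the existing %configure options.
cat > /tmp/quota.spec <<'EOF'
%build
%configure \
	--with-ext2direct=no --enable-rootsbin
make
EOF
sed -i 's/--enable-rootsbin/--enable-rootsbin --enable-ldapmail=yes/' /tmp/quota.spec
# The grep should now print the patched %configure options line.
grep -- '--enable-ldapmail=yes' /tmp/quota.spec
```

On the real file you would of course run the sed command against SPECS/quota.spec (after making a backup).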

And that was it! The package installed cleanly and a test run of warnquota -s was successful.


Nagios event handlers for services on remote machines

Part of my work consists of managing the servers on which we do our data analysis. At the moment we’ve got two servers and one virtual machine running. The VM is used as a management server; it runs things like Nagios, Cacti, Subversion, etc.

Today I implemented Nagios event handlers in this setup. The idea behind an event handler is the following: If e.g. a service goes down, Nagios should try to solve this problem itself before notifying the administrator (me). It should, in this case, simply try to restart the service.

The Nagios documentation [1] describes how to do this for a service that runs on the same machine as the Nagios service. In my case, however, the services are running on the two real servers. To me it seemed logical to use NRPE to execute the necessary commands on the remote hosts (since NRPE was already running on those machines anyway).
In order to adapt the scheme from the Nagios docs to work on remote servers as well three things need to be done:

  • The command that is executed by the event handler script should be changed to use NRPE
  • On the remote machine the nagios user (under which the NRPE service is running) should be given some sudo rights so that it is actually allowed to start a service.
  • The NRPE configuration on the remote machine should of course be changed to include the new command(s) for starting services.

So here we go! First, the Nagios configuration on the management host. In the service definition file I added one line for the event handler to each service. The definition of one service now looks like this (the last line was added):

define service {
       use                      generic-service
       hostgroup_name           sge-exec-servers
       service_description      SGE execd
       check_command            check_nrpe_1arg!check_sge_execd
       notification_interval    0 ; set > 0 if you want to be renotified
       event_handler            restart-service!sge-execd
}

Next, the restart-service command must be defined. I did that in a file that I called /etc/nagios3/conf.d/event-handlers.cfg:

define command {
       command_name     restart-service
       command_line     /etc/nagios3/conf.d/event_handler_script.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$ $HOSTADDRESS$ $ARG1$ $SERVICEDESC$
}

The variable $ARG1$ here is the name of the service that needs to be restarted. In this example it is sge-execd, from the event_handler line in the service definition. The $HOSTADDRESS$ variable will be used in the event handler script to send the command to the right host via NRPE.
The event_handler_script.sh referenced here is almost identical to the one in the Nagios documentation. As mentioned in the plan above, I changed it slightly so that it uses NRPE.

#!/bin/sh                                                                                            
#
# Event handler script for restarting a service on a remote machine via NRPE
# Taken from the Nagios documentation and
# http://www.techadre.com/sites/techadre.com/files/event_handler_script_0.txt
# Adapted by L.C. Karssen
# Time-stamp: <2010-09-14 15:24:33 (root)>
#
# Note: This script will only restart the service if the check is
#       retried 3 times (in a "soft" state) or if the service somehow
#       manages to fall into a "hard" error state.
#
 
date=`date`
 
# What state is the service in?
case "$1" in
OK)
        # The service just came back up, so don't do anything...
        ;;
WARNING)
        # We don't really care about warning states, since the service is probably still running...
        ;;
UNKNOWN)
        # We don't know what might be causing an unknown error, so don't do anything...
        ;;
CRITICAL)
        # Aha!  The BLAH service appears to have a problem - perhaps we should restart the server...
 
        # Is this a "soft" or a "hard" state?
        case "$2" in
 
        # We're in a "soft" state, meaning that Nagios is in the middle of retrying the
        # check before it turns into a "hard" state and contacts get notified...
        SOFT)
                # What check attempt are we on?  We don't want to restart the web server on the first
                # check, because it may just be a fluke!
                case "$3" in
 
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the state
                # type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check will
                # result in a "soft" recovery.  If that happens no one gets notified because we
                # fixed the problem!
                3)
                        echo -n "Restarting service $6 (3rd soft critical state)...\n"
                        # Call NRPE to restart the service on the remote machine
                        /usr/lib/nagios/plugins/check_nrpe -H $4 -c restart-$5
                        echo "$date - restart $6 - SOFT"  >> /tmp/eventhandlers
                        ;;
                        esac
                ;;
 
        # The service somehow managed to turn into a hard error without getting fixed.
        # It should have been restarted by the code above, but for some reason it didn't.
        # Let's give it one last try, shall we?
        # Note: Contacts have already been notified of a problem with the service at this
        # point (unless you disabled notifications for this service)
        HARD)
                case "$3" in
 
                4)
                        echo -n "Restarting $6 service...\n"
                        # Call NRPE to restart the service on the remote machine
                        echo "$date - restart $6 - HARD"  >> /tmp/eventhandlers
                        /usr/lib/nagios/plugins/check_nrpe -H $4 -c restart-$5
                        ;;
                        esac
                ;;
        esac
        ;;
esac
exit 0
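To make the positional parameters in the script concrete: the macros from the restart-service command definition arrive in the handler as $1 through $6, in the order they appear on the command_line. A tiny dry run illustrates the mapping (the host address and service values below are invented examples):

```shell
# Simulate the argument vector Nagios would hand to the handler script:
# $1=state, $2=state type, $3=attempt, $4=host, $5=service arg, $6=description
set -- CRITICAL SOFT 3 192.0.2.10 sge-execd "SGE execd"
echo "state=$1 type=$2 attempt=$3 host=$4 service=$5 desc=$6"
# prints: state=CRITICAL type=SOFT attempt=3 host=192.0.2.10 service=sge-execd desc=SGE execd
```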

Now Nagios can be restarted and should continue its work as usual. Time to make the changes on the remote hosts.

First, we’ll grant the necessary sudo rights to the nagios user. Run visudo and add these lines:

## Allow NRPE to restart services
User_Alias NAGIOS = nagios,nagcmd
Cmnd_Alias NAGIOSCOMMANDS = /usr/sbin/service
Defaults:NAGIOS !requiretty
NAGIOS    ALL=(ALL)    NOPASSWD: NAGIOSCOMMANDS

And finally add the required lines in the NRPE config file (/etc/nagios/nrpe.cfg):

command[restart-sge-execd]=/usr/bin/sudo /usr/sbin/service gridengine-exec start

Restart the NRPE daemon and it should all work. Test it by manually stopping the service.
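Before stopping a real service, the restart path can also be dry-run from the Nagios host by stubbing check_nrpe; everything below (the stub path, the host address) is a made-up example, not part of the real setup:

```shell
# Create a fake check_nrpe that just echoes what it would do, then
# invoke it the way the event handler script does ($4 = host, $5 = service).
mkdir -p /tmp/nrpe-stub
cat > /tmp/nrpe-stub/check_nrpe <<'EOF'
#!/bin/sh
echo "would run on $2: command $4"
EOF
chmod +x /tmp/nrpe-stub/check_nrpe
/tmp/nrpe-stub/check_nrpe -H 192.0.2.10 -c restart-sge-execd
# prints: would run on 192.0.2.10: command restart-sge-execd
```

Once the stub shows the expected command, swap in the real /usr/lib/nagios/plugins/check_nrpe and test against an actually stopped service.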

[1] Nagios documentation on Event Handlers
[2] Two blog posts that describe a similar setup. I used these as a starting point for my own setup.


Script that converts a Squirrelmail address book to vcf format

Today I installed Roundcube as a replacement for my Squirrelmail webmail setup. All went well, but Roundcube only accepts addresses in vCard format (.vcf format), whereas Squirrelmail only exports in CSV format. To solve this I wrote the following script that converts a Squirrelmail address book to vCard format. The results are sent to stdout, so run it like this: ./abook2vcf.awk user@example.com.abook > my_addresses.vcf.
On my Debian install the Squirrelmail address books (files ending in .abook) are located in /var/lib/squirrelmail/data.

#!/usr/bin/awk -f
#
# This script converts a Squirrelmail address book to 
# vcards for import in Roundcube for example.
BEGIN{
    FS="|"
}
 
{
    full_name  = $1;
    first_name = $2;
    last_name  = $3;
    email      = $4;
 
    print("BEGIN:VCARD");
    print("VERSION:3.0");
    printf("FN:%s\n", full_name);
    printf("N:%s;%s;;;\n", last_name, first_name);
    printf("EMAIL;type=INTERNET;type=HOME;type=pref:%s\n", email);
    print("END:VCARD");
}
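As a quick check, the conversion can be exercised inline on a single made-up record (the fifth field, if present, is simply ignored by the script):

```shell
# Feed one fake address book line through the same awk logic as above.
printf 'jdoe|John|Doe|jdoe@example.com|colleague\n' | awk -F'|' '{
    print "BEGIN:VCARD"; print "VERSION:3.0";
    printf("FN:%s\n", $1);
    printf("N:%s;%s;;;\n", $3, $2);
    printf("EMAIL;type=INTERNET;type=HOME;type=pref:%s\n", $4);
    print "END:VCARD";
}'
# prints a single vCard with FN:jdoe and N:Doe;John;;;
```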


Using Windows AD for Apache authentication

Recently I was setting up a Subversion repository (on a Linux server) that needs to be accessed via HTTP. Users should be able to connect to the repositories without authentication, but authentication is needed to perform write actions. Of course Apache’s htpasswd/htaccess combination would provide just that, but since we have a Windows 2008 Active Directory domain controller that provides authentication to our Windows machines I thought it would be a good idea to use it.

Configuration of the authentication and authorization is done by Apache’s mod_authnz_ldap and (on Red Hat EL) configured in /etc/httpd/conf.d/subversion.conf (which exists after installing the subversion package with yum).

Basic configuration with htaccess
For simple authentication with Apache’s htaccess mechanism the config looks like this:

LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so
 
<Location /repos>
   DAV svn
   SVNParentPath /var/www/svn
   SVNReposName "My company's repository"
 
   # Limit write permission to list of valid users.
   <LimitExcept GET PROPFIND OPTIONS REPORT>
      AuthType Basic
      AuthName "Authorization Realm for SVN"
      AuthUserFile /etc/httpd/conf.d/svn_htpasswd
      Require valid-user
 
   </LimitExcept>
</Location>

After using htpasswd to create a file with usernames and passwords on the server users could commit to the repository.

Configuration for AD Global Catalog
The first LDAP-like construction I got working was when using the AD Global Catalog. Normal LDAP traffic uses port 389, but the AD’s Global Catalog uses port 3268. The username needed to commit with SVN is windows_logon_name@your_AD.suffix, the so-called userPrincipalName. Here, your_AD and suffix are the DCs of the LDAP/AD tree. By using this userPrincipalName, users from different DC trees can be authenticated. The configuration file looks like this:

LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so
 
<Location /repos>
    DAV svn
    SVNParentPath /var/www/svn
    SVNReposName "My company's repository"
 
    # Limit write permission to list of valid users.
    <LimitExcept GET PROPFIND OPTIONS REPORT>
      AuthType Basic
      AuthName "Authorization using your LDAP account"
      AuthBasicProvider ldap
      AuthzLDAPAuthoritative off
      # Active Directory requires an authenticating DN to access records
      AuthLDAPBindDN "svntest@your_AD.suffix"
 
      # This is the password for the AuthLDAPBindDN user in Active Directory
      AuthLDAPBindPassword "some_good_password"
 
      # The LDAP query URL
      AuthLDAPURL "ldap://ldap.your_company.com:3268/?userPrincipalName?sub"
      AuthUserFile /dev/null
 
      # Require a valid user
      Require valid-user
    </LimitExcept>
</Location>

With this configuration I could commit with this command: svn commit -m "First AD test" --username your_windows_username@your_AD.suffix.

Configuration for AD + Windows logon Name
As mentioned earlier, the previous method allows people from different parts of the AD tree to log in. In order to restrict access to, for example, a specific OU, the AuthLDAPURL has to be changed. In our case the LDAP tree is not a simple OU=Users,DC=our_company,DC=com, but consists of several nested OU structures. I used the adsiedit.msc snap-in (ADSI editor) on the AD controller to find out the exact structure, since I needed to know which parts were CNs and which were OUs.
In order to authenticate against the Windows logon names in a certain sub-OU, the AuthLDAPURL is:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?sAMAccountName?sub?(objectClass=*)"

Configuration for AD + Windows Display Name
If you want the users to use their common name (the Display Name in the AD) use:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?cn"

Users can now commit with: svn commit -m "Another AD test" --username "Firstname Lastname".

Configuration for AD + another field
In our case login authentication on the Linux/UNIX machines is not done through the AD. Furthermore, the user names are not synchronised between Linux and Windows. This poses a small inconvenience, since by default an svn commit uses the Linux username. As the AD doesn’t know about this name, the first authentication attempt fails. Subsequently Apache asks for the user name, and then the user can enter his Windows AD credentials (principal name, display name or Windows logon name, depending on which of the above configurations was chosen). So as a quick workaround (and just to see if I could get it to work) I entered my Linux user name into the Office field in the AD. In the ADSI Editor I found the real name of this field: physicalDeliveryOfficeName. With the following AuthLDAPURL I could use the Office field to authenticate me:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?physicalDeliveryOfficeName"

Now a simple svn commit works.



Script to tunnel RDP connections through stepping stone server using SSH

In order to connect to the servers at work we need to connect through a stepping stone host (via SSH). Since some of the servers are MS Windows machines which can be managed via the Remote Desktop Protocol (RDP), this traffic needs to be tunneled over SSH as well.
I wrote the following bash script to automate setting up the tunnel. It sets some default variables and then looks for an available port between 1234 and 1254 (chosen completely arbitrarily) and uses it for the tunnel. Then it uses the rdesktop program to start the RDP connection.

#!/bin/bash
#
# This script makes an ssh tunnel to a stepping stone server
# and uses it to start an rdesktop connection to the machine 
# given as the first argument of the script. 
#
# (C) L.C. Karssen
# $Id: winremote.sh,v 1.14 2010/02/10 13:03:08 lennart Exp $
#
 
# User-configurable variables
ssh_username=your_steppingstone_username_here
steppingstone=steppingstone.your_company.com
rdesktop_username=your_windows_username_here
rdesktop_domain=your_windows_domain_here
rdesktop_opts="-z -g 1024x768 -a 16"
rdesktop_port=3389 # This is the standard MS RDP port
 
 
# For ordinary users it should not be necessary to change anything below this line. 
# Some functions:
usage()
{
    cat <<EOF
Usage:
    $program [options] rdesktop_hostname 
 
Make a remote desktop connection through an SSH tunnel.
 
Options: 
    -h, --help                                   print this help message
    -s, --steppingstone steppingstone_hostname   set other stepping stone host
                                                   (default: $steppingstone)
    -t, --timeout sec                            set timeout (default: 1)
    -v, --verbose                                verbose output
     --version                                   print version
 
Note that some customisations need to be made in the first few lines of this 
script (e.g. user names and other defaults)
EOF
}
 
program=`basename $0`
 
# Command line option parsing. Shift all options 
verbose=
timeout=1
 
while [ $# -gt 0 ]
do 
    case $1 in
	-v | --verbose | -d | --debug ) 
	    verbose=true
	    ;;
	--version )
	    echo '$Revision: 1.14 $'
	    exit 0
	    ;;
	-t | --timeout ) 
	    shift
	    timeout="$1"
	   if [ $timeout -lt 1 ]; then
	       timeout=1
	   fi
	   if [ $verbose ]; then
	       echo "Timeout set to $timeout"
	   fi
	   ;;
	-s | --steppingstone ) 
	   shift
	   steppingstone="$1"
	   if [ $verbose ]; then
	       echo "Steppingstone server is $steppingstone"
	   fi
	   ;;
	-h | --help ) 
	   usage
	   exit 0
	   ;;
	-*) 
	   echo "$0: invalid option $1" >&2
 	   usage
	   exit 1
	   ;;
	*) 
	   break
	   ;;
    esac
    shift
done
 
# Server name (as seen on the steppingstone) that we want to connect to:
rdesktop_server=$1 
 
################### Config done, let's get to work ########################
 
# Simple usage description
if [ "$rdesktop_server" == "" ]; then
    echo "Error: No rdesktop host given" >&2
    usage
    exit 1
fi
 
# Find a free port on the local machine that we can use to connect through
min_port=1234
max_port=1254
used_ports=(`netstat -tan | awk '{print $4}' | grep 127.0.0.1 | awk -F: '{print $2}' | sort -g`)
if [ $verbose ]; then
    echo "Used ports are: ${used_ports[@]}"
fi
 
# In the next line we first print the $used_ports as an array, but with 
# each port on a single line. This is then piped to an awk script that 
# puts all the values in an array and subsequently walks through all ports 
# from $min_port to $max_port in order to find the first port that is not 
# in the array. This port is printed.
local_port=`printf "%i\n" ${used_ports[@]} | \
    awk -v minp=$min_port -v maxp=$max_port \
    '{ array[$1]=1 } END { for (i=minp; i<=maxp; i++) { if (i in array) continue; else { print i; break } } }'`
if [ "$local_port" == "" ]; then
    echo "Error: No ports free! Exiting..." >&2
    exit 2
fi
if [ $verbose ]; then
    echo "Selected port was: $local_port"
fi
 
# Create tunnel:
if [ $verbose ]; then
    echo "Creating SSH tunnel..."
fi
ssh_opts="-f -N -L"
ssh $ssh_opts $local_port:$rdesktop_server:$rdesktop_port \
    $ssh_username@$steppingstone 
 
# Allow the ssh tunnel to be established
sleep $timeout
 
# Abort if tunnel has not been established
pidof_ssh=`pgrep -f "ssh $ssh_opts $local_port"`
if [ "$pidof_ssh" == "" ]; then
    echo "Error: Timeout while establishing tunnel" >&2
    echo "Exiting..."
    exit 3
fi
 
# Make rdesktop connection
if [ $verbose ]; then
    echo "Opening Remote desktop connection to $rdesktop_server..."
fi
rdesktop $rdesktop_opts -u $rdesktop_username -p - \
    -d $rdesktop_domain localhost:$local_port
 
# Clean up tunnel
if [ $verbose ]; then
    echo "Cleaning up SSH tunnel with pid $pidof_ssh and local port $local_port"
fi
kill $pidof_ssh
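The free-port search in the middle of the script can be tried in isolation; in this sketch the list of used ports is faked so the result is predictable:

```shell
# Pretend 1234, 1235 and 1237 are already taken; the first free port
# in the 1234-1254 range should then be 1236.
printf '%i\n' 1234 1235 1237 | awk -v minp=1234 -v maxp=1254 \
    '{ array[$1]=1 } END { for (i=minp; i<=maxp; i++) { if (i in array) continue; else { print i; break } } }'
# prints 1236
```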


Cloning Ubuntu virtual machines: some problems (and solutions)

Yesterday I set up a KVM virtual machine on my new Ubuntu 9.10 server. The VM also ran Ubuntu 9.10 server. In order to do some performance tests (what would be the speed up of having the VM’s disks on an LVM LV on the host, compared to having them in a file on the host) I used virt-clone to clone the machine:

virt-clone --connect=qemu:///system -o testldap -n testldap-lvm -f testldap-lvm/ubuntu-kvm/disk0.img

This clones the VM named testldap to testldap-lvm and puts its disk file in the subdirectory testldap-lvm/ubuntu-kvm/. After that I still had to convert this image file to its location in an LV, but that’s not what this post is about.

As the machine is cloned, the MAC address of its virtual NIC is also changed. The ‘source’ VM had 52:54:00:f2:cc:40, the new VM was given 00:16:36:46:34:42. As I booted the new VM I noticed it wouldn’t come up as expected. I couldn’t reach it via the fixed IP that I had given the source VM (even though the source VM was shut down, of course). Closer inspection revealed that the interface name for the NIC in the new VM had changed. I vaguely remembered that Debian-derived distros do that: because they don’t want NIC name assignments (eth0, eth1, etc.) to change if a new network adapter is added, they tie a name to a MAC address. And, as noted, the MAC address had indeed changed in the cloning process.

The assignments between MAC and eth? name are recorded in the file /etc/udev/rules.d/70-persistent-net.rules. They are set by the script /lib/udev/write_net_rules, so I removed the execute permissions on that file. However, this was not a clean solution, since it resulted in an error on start up. I found that editing /lib/udev/rules.d/75-persistent-net-generator.rules is a far better solution. Adding the lines

# ignore KVM virtual interfaces
ENV{MATCHADDR}=="52:54:00:*", GOTO="persistent_net_generator_end"
# This seems to be the range used by Xen, but also by virt-clone
ENV{MATCHADDR}=="00:16:36:*", GOTO="persistent_net_generator_end"

seems to do the trick (don’t forget to remove the rules already added in /etc/udev/rules.d/70-persistent-net.rules). Make sure to add them after the lines

# read MAC address
ENV{MATCHADDR}="$attr{address}"

so that the variable MATCHADDR has a value. I documented this solution in the Ubuntu bug report that seemed the most appropriate as well.
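Removing the already-generated rules can be done with a one-line sed. The demo below runs on a throw-away copy with invented contents; on a real system you would edit /etc/udev/rules.d/70-persistent-net.rules in place after making a backup:

```shell
# Build a fake persistent-net rules file with one KVM/Xen-range entry
# and one physical NIC, then delete the lines for the VM MAC ranges.
rules=/tmp/70-persistent-net.rules
cat > "$rules" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:36:46:34:42", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1c:42:aa:bb:cc", NAME="eth0"
EOF
sed -i '/ATTR{address}=="52:54:00/d; /ATTR{address}=="00:16:36/d' "$rules"
cat "$rules"
# only the eth0 line (the non-VM NIC) remains
```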

This solved one problem. Then the next problem reared its ugly head: both the source VM and the clone refused to finish their boot process, they kept hanging on the NFS mounts defined in /etc/fstab. The only option mountall gave was to enter the root password (after pressing ESC) or type Ctrl-D to continue. Doing the latter resulted in nothing but an infinite wait. In an Ubuntu bug report I found that using DHCP for the network interface would solve the problem. And, indeed it did. However, since I want static IP addresses for my servers this was not a solution that I liked. Much to my surprise the NFS mounts worked perfectly after changing the interface (in /etc/network/interfaces) back to static. I don’t know why, but on both VMs I set the configuration for eth0 from static to dhcp, rebooted, changed it back to static and rebooted again to find the problem solved… Strange!

Update 2009-12-18:
As it turns out, the solution to the mount problem doesn’t always work. I tried it again, but now it failed to work after switching back from DHCP to a static IP. I guess it has something to do with the lease time of the IP, because in the case I described above there was a night between using the DHCP IP and turning static back on. So somewhere, something needs to time out before switching back from DHCP to static IPs works again.



© 2024 Lennart's weblog
