
Installing and configuring Puppet

Puppet is a configuration management system. In short this means that by setting up a server (the Puppet master) you can manage many other machines (nodes) with this server by specifying which packages should be installed, which files need to be present, their permissions, etc. The nodes poll the server every 30 minutes (by default) to see if they should apply any changes to their configuration. Other packages that implement a similar idea are CfEngine and Chef.
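
As an aside, the polling interval is controlled by the runinterval setting (in seconds) in the node’s /etc/puppet/puppet.conf. A minimal sketch, assuming you want agents to poll every 15 minutes instead of every 30:

[agent]
runinterval = 900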

Note that all these instructions were performed as root.

The puppet master

Gaffel will be the Puppet master. I’ve added a DNS entry for puppet.karssen.org that points to gaffel. This installs both the client and the Puppet master:

$ aptitude install puppet puppetmaster

The main configuration of server and client can be found in /etc/puppet/puppet.conf. We’ll leave it at the defaults for now. The file /etc/puppet/manifests/site.pp contains options that apply to the whole site. Let’s create it and add the following contents:

import "nodes"
 
# The filebucket is for backups. Originals of files that Puppet modifies
# get stored here.
filebucket { main: server => 'puppet.karssen.org' }
File  { backup => main}
 
# Set the default $PATH for executing commands on node systems.
Exec { path => "/usr/bin:/usr/sbin:/bin:/sbin:" }

The file /etc/puppet/manifests/nodes.pp defines the nodes/clients that will be managed by puppet as well as what configuration will be applied to them, so-called roles. For now, let’s make a quick example:

node common {
	include packages
}
 
node lambik inherits common {
	include ntp::client
}

Both the ‘packages’ and the ‘ntp’ modules still need to be defined. Let’s do that now.

Modules are collections of puppet code (known as manifests) and related files that are used for client configuration. Modules are stored in /etc/puppet/modules/.
Let’s start with the ntp example. First make the necessary directory structure:

$ mkdir -p /etc/puppet/modules/ntp/{manifests,files,templates}

Every module needs a file init.pp that declares the class. It can also include other files. The files and templates directories are used to store files that need to be copied to the node or templates used to generate such files, respectively. We’ll come across some examples of both. This is the init.pp file for the ntp role (/etc/puppet/modules/ntp/manifests/init.pp):

class ntp::client {
	package { "ntp":
		ensure => installed,
	}

	service { "ntp_client":
		name       => "ntp",
		ensure     => running,
#		hasstatus  => true,
		hasrestart => true,
		require    => Package["ntp"],
	}
}

Here we indicate that the NTP service must be running and that its init script (in /etc/init.d) accepts the status and restart options. Lastly, the require line states that the ntp package must be installed before this resource can be applied. This is necessary because the order in which the two directives are executed is not necessarily the order in which they appear in the manifest.
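
As an aside, the same ordering can also be expressed outside the resources with a chaining arrow, available since Puppet 2.6; a minimal sketch:

Package["ntp"] -> Service["ntp_client"]

Both notations are equivalent; using the require metaparameter simply keeps the dependency visible inside the resource itself.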

The # in front of the hasstatus attribute is there because of a bug in the Puppet version (2.6.4) shipped with Ubuntu 11.04. See http://projects.puppetlabs.com/issues/5610 for the bug report. In version 2.6.7 it is supposedly fixed.

In our nodes.pp file we also mentioned a packages class. In this class we list all the packages that we want to have installed on the node. Let’s make the packages module. First create the necessary directories:

$ mkdir -p /etc/puppet/modules/packages/{manifests,files,templates}

Add the file /etc/puppet/modules/packages/manifests/init.pp:

class packages {
	$base_packages = [
		"openssh-server",
		"nfs-common",
		"etckeeper",
		"htop",
		"iotop",
		"iftop",
	]

	$editor_packages = [
		"emacs",
		"emacs-goodies-el",
		"elscreen",
	]

	$all_packages = [
		$base_packages,
		$editor_packages,
	]

	package { $all_packages:
		ensure => installed,
	}
}

Here I’ve defined three variables (beginning with a $ sign), one for base packages, one for editor-related packages and one called $all_packages that incorporates them both. Finally, I tell puppet to ensure they are all installed.
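
Before waiting for a node to pull this configuration, the class can be given a dry run on the master itself; a sketch using puppet apply (--noop prevents any changes from being made, and --modulepath points it at the modules directory):

$ puppet apply --noop --modulepath=/etc/puppet/modules -e 'include packages'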

Setting up a client

As a test client I’m using lambik, one of my MythTV frontends.

$ aptitude install puppet

To make sure that Puppet starts by default on system startup, edit the file /etc/default/puppet and set START to yes:

# Defaults for puppet - sourced by /etc/init.d/puppet
 
# Start puppet on boot?
START=yes
 
# Startup options
DAEMON_OPTS=""

Now edit /etc/puppet/puppet.conf (on the client) and add the FQDN of the puppet master server to the [main] section:

[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
prerun_command=/etc/puppet/etckeeper-commit-pre
postrun_command=/etc/puppet/etckeeper-commit-post
server = puppet.karssen.org
 
[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY

Setting up secure communication between master and nodes and first test run

Puppet uses SSL certificates to set up a secure connection between master and nodes. Before you can apply any changes to the client, certificates need to be exchanged and signed. First, tell the client to connect to the puppet master:

$ puppetd --test
info: Creating a new SSL key for lambik.karssen.org
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for lambik.karssen.org
info: Certificate Request fingerprint (md5): 1D:A3:3A:4A:A6:DA:D6:C8:96:F4:D4:7E:52:F4:12:1D
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled

On the puppet master we can now sign the certificate:

$ puppetca -l
lambik.karssen.org
$ puppetca -s lambik.karssen.org
notice: Signed certificate request for lambik.karssen.org
notice: Removing file Puppet::SSL::CertificateRequest lambik.karssen.org at '/var/lib/puppet/ssl/ca/requests/lambik.karssen.org.pem'

On the client we can now rerun puppetd:

root@lambik:~# puppetd --test
info: Caching catalog for lambik.karssen.org
info: Applying configuration version '1311930908'
notice: /Stage[main]/Packages/Package[iotop]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Packages/Package[iftop]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Ntp/Package[ntp]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Packages/Package[emacs-goodies-el]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Packages/Package[htop]/ensure: ensure changed 'purged' to 'present'
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished catalog run in 78.43 seconds

If all went well, we can now start the puppet client daemon to keep our system under puppet control:

$ service puppet start

Adding (configuration) files to the roles

Since I run my own NTP server (ntp.karssen.org, only accessible from inside my LAN), the NTP configuration file (/etc/ntp.conf) must be changed. Of course, we want Puppet to take care of this. The ntp.conf file I want to distribute to all nodes has the following contents (note that the only changes are the name of the server and the commented-out restrict lines):

# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
 
driftfile /var/lib/ntp/ntp.drift
 
 
# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/
 
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
 
# Specify one or more NTP servers.
 
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
server ntp.karssen.org
 
# Use Ubuntu's ntp server as a fallback.
server ntp.ubuntu.com
 
# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.
 
# By default, exchange time with everybody, but don't allow configuration.
#restrict -4 default kod notrap nomodify nopeer noquery
#restrict -6 default kod notrap nomodify nopeer noquery
 
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1
 
# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
 
 
# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255
 
# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines.  Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient

Save this file in /etc/puppet/modules/ntp/files (on the puppet master). Now edit the manifest for the ntp role (/etc/puppet/modules/ntp/manifests/init.pp) to add the file section and a subscribe line:

class ntp::client {
	package { "ntp":
		ensure => installed,
	}

	service { "ntp_client":
		name       => "ntp",
		ensure     => running,
#		hasstatus  => true,
		hasrestart => true,
		require    => Package["ntp"],
		subscribe  => File["ntp_client_config"],
	}

	file { "ntp_client_config":
		path    => "/etc/ntp.conf",
		owner   => root,
		group   => root,
		mode    => 644,
		source  => "puppet:///ntp/ntp.conf",
		require => Package["ntp"],
	}
}

The URL specified in the source line (puppet:///ntp/ntp.conf) automatically points to the files directory of the module, as mentioned just above. Because we don’t want to wait for Puppet to pass this configuration on automatically, let’s run it by hand:

root@lambik:~# puppetd --test
info: Caching catalog for lambik.karssen.org
info: Applying configuration version '1311936811'
--- /etc/ntp.conf	2011-06-17 07:59:54.000000000 +0200
+++ /tmp/puppet-file20110729-12128-1h3fupz-0	2011-07-29 12:53:33.279622938 +0200
@@ -16,16 +16,14 @@
 # Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
 # on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
 # more information.
-server 0.ubuntu.pool.ntp.org
-server 1.ubuntu.pool.ntp.org
-server 2.ubuntu.pool.ntp.org
-server 3.ubuntu.pool.ntp.org
+server ntp.karssen.org
 
 # Use Ubuntu's ntp server as a fallback.
 server ntp.ubuntu.com
 
@@ -33,8 +31,8 @@
 # up blocking replies from your own upstream servers.
 
 # By default, exchange time with everybody, but don't allow configuration.
-restrict -4 default kod notrap nomodify nopeer noquery
-restrict -6 default kod notrap nomodify nopeer noquery
+#restrict -4 default kod notrap nomodify nopeer noquery
+#restrict -6 default kod notrap nomodify nopeer noquery
 
 # Local users may interrogate the ntp server more closely.
 restrict 127.0.0.1
info: FileBucket adding /etc/ntp.conf as {md5}32280703a4ba7aa1148c48895097ed07
info: /Stage[main]/Ntp::Client/File[ntp_client_config]: Filebucketed /etc/ntp.conf to main with sum 32280703a4ba7aa1148c48895097ed07
notice: /Stage[main]/Ntp::Client/File[ntp_client_config]/content: content changed '{md5}32280703a4ba7aa1148c48895097ed07' to '{md5}0d1b81c95bab1f6b08eb27dfaeb18bb5'
info: /Stage[main]/Ntp::Client/File[ntp_client_config]: Scheduling refresh of Service[ntp_client]
notice: /Stage[main]/Ntp::Client/Service[ntp_client]: Triggered 'refresh' from 1 events
notice: Finished catalog run in 3.06 seconds

Setting NFS mounts in /etc/fstab

On my clients I want to mount several NFS shares. Let’s create the directories for the nfs_mounts module (on the puppet master of course):

$ mkdir -p /etc/puppet/modules/nfs_mounts/{manifests,files,templates}

Next, let’s edit the manifest (/etc/puppet/modules/nfs_mounts/manifests/init.pp):

class nfs_mounts {
	 # Create the shared folder unless it already exists
	 exec { "/bin/mkdir -p /var/sharedtmp/":
		   unless => "/usr/bin/test -d /var/sharedtmp/",
	 }
 
	 mount { "/var/sharedtmp/":
	    atboot  => true,
	    ensure  => mounted,
	    device  => "nfs.karssen.org:/var/sharedtmp",
	    fstype  => "nfs",
	    options => "vers=3",
	    require => Package["nfs-common"],
	 }
}

This should create the /var/sharedtmp directory and mount it. Note that I mention the nfs-common package in a require line. This package was defined in the packages module (in the $base_packages variable). Now let’s add this module to the nodes.pp file:

node common {
  include packages
}
 
node lambik inherits common {
	include ntp::client
	include nfs_mounts
}
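
Once the agent has applied this, the mount resource should have resulted in a line in the node’s /etc/fstab that looks roughly like this (a sketch of the expected result, not output copied from a node):

nfs.karssen.org:/var/sharedtmp /var/sharedtmp nfs vers=3 0 0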

Since I’ve got more than a single NFS mount, let’s extend the previous example and use a defined resource. Change the file /etc/puppet/modules/nfs_mounts/manifests/init.pp as follows:

define nfs_mount(
	$location,
	$server  = "nfs.karssen.org",
	$options = "vers=3",
	$fstype  = "nfs"
) {
	file { "$location":
		ensure => directory,
	}

	mount { "$location":
		atboot  => true,
		ensure  => mounted,
		device  => "${server}:${location}",
		fstype  => "$fstype",
		options => "$options",
		require => [ Package["nfs-common"], File["$location"] ],
	}
}

class nfs_mounts {
	nfs_mount { "/home":
		location => "/home",
	}

	nfs_mount { "/var/sharedtmp":
		location => "/var/sharedtmp",
	}

	nfs_mount { "/var/video":
		location => "/var/video",
	}

	nfs_mount { "/var/music":
		location => "/var/music",
	}
}

Here we first define a resource called nfs_mount, which accepts various parameters, all of which have a default value except $location. We then ensure that this location exists as a directory and define how it should be mounted. In the subsequent class definition we use this nfs_mount resource several times to mount the various NFS shares.
Note that it would have been easier if the definition of nfs_mount had started with

define nfs_mount (
	  $location = $name,

because then the invocations of nfs_mount in the class would not need the location => line. Unfortunately this doesn’t work. It’s a known bug that has been fixed in version 2.6.5 (http://projects.puppetlabs.com/issues/5061).


Making a .deb package for software that doesn’t accept the DESTDIR variable in its Makefile

Because I’ll be deploying a new server in the near future and I want to keep it as clean as possible, I decided (again) to try to find out how to create a .deb package (as used, for example, by Debian and Ubuntu Linux) for some software that doesn’t follow the autotools way of doing things. This time I found a way. But first some background info.

In the Unix/Linux world many programs are compiled from source in three steps:

./configure
make
make install

Usually the necessary files for this have been created using the autotools. The goal of the first step is to create a so-called Makefile that contains instructions on how to compile and install the files (as done in the two subsequent make steps).

Some software packages, however, include a ready-made Makefile that, in addition, doesn’t accept the environment variable DESTDIR. This last point is what makes packaging the application into a .deb file a bit tricky. The reason is that the package build scripts want to install the files of your application into a temporary directory, not into system-wide directories like /usr/bin/, during the packaging process. As such, packaging does not require root privileges.

At work we use many programs and tool sets developed by ourselves and other scientists. I know from my own experience that setting up the autotools for your program is not trivial. Actually, for lack of time I’ve never successfully done it, and for most of the rather simple programs that I’ve written, setting up a complete autoconf/automake environment seems a bit of overkill. I usually ended up writing a simple Makefile that compiles the code and installs it (usually in /usr/local/bin).

Merlin by Abecasis et al. is a great piece of software developed at the University of Michigan. However, as you may have expected by now, its Makefile does not accept the DESTDIR variable; instead, running make tells you that in order to install into a different directory you’ll have to run

make INSTALLDIR=/some/other/directory

Therefore, all quick and dirty .deb recipes one finds on the Internet do not work without some adaptations. So here is what I did to make a .deb of it. It won’t be a full tutorial on how to do packaging, see the references at the end of this post for that. I’ll assume here that you have your build environment set up (e.g. the build-essential and fakeroot packages, as well as some others).

tar -xzf merlin-1.1.2.tar.gz
cd merlin-1.1.2
dh_make --single --email youremail@address --file ../merlin-1.1.2.tar.gz

Now the basic files are ready. Apart from the untarred source files the files needed for Debian packaging have also been created (in merlin-1.1.2/debian).

Time to make the necessary changes. First, since the Makefile included with merlin does not accept the DESTDIR variable that the Debian packaging system uses, we’ll patch the Makefile in such a way that it works (I tried to fix this in the debian/control file, but in the end adapting the Makefile was much easier). I do this by changing the line

INSTALLDIR=/usr/local/bin

to

# default installation directory
ifeq ($(DESTDIR),)
    INSTALLDIR=/usr/local/bin/
else
    INSTALLDIR=$(DESTDIR)/usr/bin/
endif
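
With this change, the Debian build tooling can stage the files into its temporary packaging tree. Under debhelper the install step then boils down to something like the following (a sketch; the exact invocation may differ between debhelper versions):

make install DESTDIR=$(pwd)/debian/merlin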

Let’s do some polishing of the package. I don’t want to make the perfect package, but adding a bit of text to the debian/control file makes a lot of difference. This is what it looked like after my edits:

Source: merlin
Section: science
Priority: extra
Maintainer: Lennart C. Karssen <youremail@address>
Build-Depends: debhelper (>= 7)
Standards-Version: 3.8.3
Homepage: http://www.sph.umich.edu/csg/abecasis/merlin/index.html
 
Package: merlin
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Package for fast pedigree analysis
 MERLIN uses sparse trees to represent gene flow in pedigrees
 and is one of the fastest pedigree analysis packages around
 (Abecasis et al, 2002).

Also editing the file debian/changelog is a good idea, especially since I changed the source code (remember the Makefile?). This is what I wrote:

merlin (1.1.2-1) unstable; urgency=low
 
  * Initial release
  * Adjusted Makefile to make DESTDIR work.
 
 -- Lennart C. Karssen <youremail@address>  Tue, 05 Apr 2011 12:04:21 +0200

Officially you should edit the debian/copyright file as well, but since the merlin licence doesn’t allow distribution of the source or the binaries I didn’t bother.

To finally build the package run

dpkg-buildpackage -rfakeroot -us -uc

This creates a .deb file in the directory where you started. As a final touch you can check your package for errors with

lintian ../merlin_1.1.2-1_amd64.deb
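
It can also be instructive to peek inside the package before installing it; dpkg can show the control information and the list of files it will install:

dpkg -I ../merlin_1.1.2-1_amd64.deb
dpkg -c ../merlin_1.1.2-1_amd64.deb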


Using rsync to backup to a remote Synology Diskstation

An updated version of the script can be found here.

I recently bought a NAS, a Synology DiskStation DS211j and stuffed two 1TB disks in it. I configured the disks to be in RAID 1 (mirrored) in case one of them decides to die. I then brought the NAS to a family member’s house and installed it there. Now she uses it to back up her important files (and as a storage tank for music and videos).

The good thing for me is that I can now make off-site backups of my home directories. I configured the DS211j to accept SSH connections so that I can log into it (as user admin or root). I used the web interface to create a directory for my backups (which appeared to be /volume1/BackupLennart after logging in with SSH).

After making a hole in her firewall that allowed me to connect to the DS211j, I created a backup script in /etc/cron.daily with the following contents:

#!/bin/bash
#
# This script makes a backup of my home dirs to a Synology DiskStation at
# another location. I use LVM for my /home, so I make a snapshot first and
# backup from there.
#
# Time-stamp: <2011-02-06 21:30:14 (lennart)>
 
###############################
# Some settings
###############################
 
# LVM options
VG=raidvg01
LV=home
MNTDIR=/mnt/home_rsync_snapshot/
 
# rsync options
DEST=root@remote-machine.example.com:/volume1/BackupLennart/
SRC=${MNTDIR}/*
OPTIONS="-e ssh --delete --progress -azvhHS --numeric-ids --delete-excluded "
EXCLUSIONS="--exclude lost+found --exclude .thumbnails --exclude .gvfs --exclude .cache --exclude Cache"
 
 
 
###############################
# The real work
###############################
 
# Create the LVM snapshot
if [ -d $MNTDIR ]; then
    # If the snapshot directory exists, another backup process may be
    # running
    echo "$MNTDIR already exists! Another backup still running?"
    exit -1
else
    # Let's make snapshots
    mkdir -p $MNTDIR
    lvcreate -L5G -s -n snap$LV /dev/$VG/$LV
    mount /dev/$VG/snap$LV $MNTDIR
fi
 
 
# Do the actual backup
rsync $OPTIONS $EXCLUSIONS $SRC $DEST
 
# Remove the LVM snapshot
if [ -d $MNTDIR ]; then
    umount /dev/$VG/snap$LV
    lvremove -f /dev/$VG/snap$LV
    rmdir $MNTDIR
else
    echo "$MNTDIR does not exist!"
    exit -1
fi

Let’s walk through it: in the first section I configure several variables. Since I use LVM on my server, I can use it to make a snapshot of my /home partition. The LVM volume group I use is called ‘raidvg01’. Within that VG my /home partition resides in a logical volume called ‘home’. The variable MNTDIR is the place where I mount the LVM snapshot of ‘home’.

The rsync options are quite straightforward. Check the rsync man page to find out what they mean. Note that I used the --numeric-ids option because the DS211j doesn’t have the same users as my server, and this way all ownerships will still be correct if I ever need to restore from this backup.
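
Before trusting the nightly cron run, it is worth doing a manual dry run; adding -n (--dry-run) to the options makes rsync report what it would transfer without touching the destination:

rsync -n -e ssh --delete --progress -azvhHS --numeric-ids --delete-excluded \
    /mnt/home_rsync_snapshot/* root@remote-machine.example.com:/volume1/BackupLennart/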

In the section called “The real work” I first create the MNTDIR directory. Subsequently I create the LVM snapshot and mount it. After this the rsync backup can be run and finally I unmount the snapshot and remove it, followed by the removal of the MNTDIR.

The script is placed in /etc/cron.daily, so it is executed every day. Because we use SSH to connect to the remote DS211j, I set up passwordless SSH key access. This Debian howto will tell you how to set that up.
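
In essence (assuming the DiskStation’s SSH service accepts it) the key setup comes down to generating a key pair without a passphrase and copying the public key to the NAS:

ssh-keygen -t rsa
ssh-copy-id root@remote-machine.example.com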

The only thing missing in this setup is that the backups are not stored in an encrypted form on the remote NAS, but for now this is good enough. I can’t wait until the network bandwidth on both sides of this backup connection get so fast (and affordable) that I can easily sync my music as well. Right now uploads are so slow that I hardly dare to include those. I know that I shouldn’t complain since the Netherlands has one of the highest broadband penetrations in the world, but, hey, don’t you just always want a little more, just like Oliver Twist?


Recompiling the quota package in CentOS so that it can use LDAP to find e-mail addresses

Today I compiled my first RPM package from source :-)! But let’s start at the beginning…

At work I recently implemented disk quota on our server. While trying to set up /etc/warnquota.conf I noticed the example lines at the bottom that showed how to configure warnquota to look up e-mail addresses in an LDAP directory. This was exactly what I needed, because we store our users’ e-mail addresses in our LDAP tree. Without this feature warnquota would try to send its warning mails to user@our-server.example.com instead of the users’ real addresses (which for guests that only visit us for a few weeks are external addresses anyway). The lines in /etc/warnquota.conf were:

LDAP_MAIL = true
LDAP_HOST = ldap.example.com
LDAP_PORT = 389
LDAP_BASEDN = ou=Users,dc=example,dc=com
LDAP_SEARCH_ATTRIBUTE = uid
LDAP_MAIL_ATTRIBUTE = mail
LDAP_DEFAULT_MAIL_DOMAIN = example.com
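
To check that the directory actually returns a mail attribute for a given uid, a quick ldapsearch helps (assuming anonymous reads are allowed; jdoe is a placeholder username):

ldapsearch -x -h ldap.example.com -p 389 -b "ou=Users,dc=example,dc=com" "(uid=jdoe)" mail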

So, after saving the file I tested it by running warnquota -s (as root, and I also made sure I reduced my own quota so I would be the one getting an e-mail warning).

Unfortunately warnquota spat out some errors:

warnquota: Error in config file (line 65), ignoring
warnquota: Error in config file (line 66), ignoring
warnquota: Error in config file (line 67), ignoring
warnquota: Error in config file (line 68), ignoring
warnquota: Error in config file (line 69), ignoring
warnquota: Error in config file (line 70), ignoring
warnquota: Warning: Mailer exitted abnormally.

These were the line numbers with the LDAP options above :-(. Google pointed me to an old bug in Fedora that was marked as resolved. I also found out that the quota tools should be compiled with LDAP support for this to work. To be sure that it was actually possible I configured warnquota on my home server that runs Ubuntu 10.04 and also uses LDAP. There, it all worked as expected.

So, my next step was clear: make my own RPM package for quota. The one installed by CentOS 5.4 is quota-3.13-1.2.5.el5. These are the steps I took:

  • Enable the CentOS source repository by creating the file /etc/yum.repos.d/CentOS-Source.repo with the following contents:
    [centos-src]
    name=CentOS $releasever - $basearch - Source
    baseurl=http://mirror.centos.org/centos/$releasever/os/SRPMS/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

    Then run yum update and check that the new repository is listed.

  • Install the yum-utils and rpmdevtools packages: sudo yum install yum-utils rpmdevtools.
  • Set up a directory to do your build work in. I created the directory ~/tmp/pkgtest.
  • Run rpmdev-setuptree to create the required sub directories.
  • Set the basic build configuration by creating the file ~/.rpmmacros with the following contents:
    # Path to top of build area
    %_topdir    /home/lennart/tmp/pkgtest
  • Go into the SRPMS directory and download the source package: yumdownloader --source quota
  • In the top level directory run rpm -i SRPMS/quota-3.13-1.2.5.el5.src.rpm to unpack the package.
  • The SPECS directory now contains the .spec file that contains the build instructions. The SOURCES directory contains the source files and patches from Red Hat. In a temporary directory I untarred the quota tools source tar.gz file and ran ./configure --help to find out which option I should add to the spec file in order to enable LDAP lookup. The option was: --enable-ldapmail=yes. The set of configure lines in the spec file now looked like this:
    %build
    %configure \
    	--with-ext2direct=no --enable-rootsbin --enable-ldapmail=yes
    make

    In the spec file I also added a changelog entry:

    * Mon Oct 18 2010 Lennart Karssen <lennart@karssen.org> 1:3.13-1.2.6
    - Added --enable-ldapmail=try to the ./configure line to enable LDAP
      for looking up mail addresses. (Resolves Red Hat Bugzilla 133207,
      it is marked as resolved there, but apparently was reintroduced.)

    And I also bumped the build version number at the top of the file (the Release: line). Finally, I added openldap-devel to the BuildPreReq line (of course I ran into a compilation error first and then installed the openldap-devel package :-)).

  • Now it’s time to build the package. In the SPECS directory run: rpmbuild -bb quota.spec and wait. The RPM package is created in the RPMS directory.
  • Install the package: sudo rpm -Uvh RPMS/x86_64/quota-3.13-1.2.6.x86_64.rpm (if you didn’t bump the package version number, the --replacepkgs option must be added to ‘upgrade’ to the same version).
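
To double-check that the rebuilt binary really was compiled with LDAP support, one can inspect the libraries it links against; a quick sanity check (the path is where CentOS installs warnquota):

ldd /usr/sbin/warnquota | grep -i ldap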

And that was it! The package installed cleanly and a test run of warnquota -s was successful.


Nagios event handlers for services on remote machines

Part of my work consists of managing the servers on which we do our data analysis. At the moment we’ve got two servers and one virtual machine running. The VM is used as a management server, it runs things like Nagios, Cacti, Subversion, etc.

Today I implemented Nagios event handlers in this setup. The idea behind an event handler is the following: If e.g. a service goes down, Nagios should try to solve this problem itself before notifying the administrator (me). It should, in this case, simply try to restart the service.

The Nagios documentation [1] describes how to do this for a service that runs on the same machine as the Nagios service. In my case, however, the services are running on the two real servers. To me it seemed logical to use NRPE to execute the necessary commands on the remote hosts (since NRPE was already running on those machines anyway).
In order to adapt the scheme from the Nagios docs to work on remote servers as well, three things need to be done:

  • The command that is executed by the event handler script should be changed to use NRPE
  • On the remote machine the nagios user (under which the NRPE service is running) should be given some sudo rights so that it is actually allowed to start a service.
  • The NRPE configuration on the remote machine should of course be changed to include the new command(s) for starting services.

So here we go! First, the Nagios configuration on the management host. In the service definition file I added one line for the event handler to each service. The definition of one service now looks like this (the last line was added):

define service {
       use                      generic-service
       hostgroup_name           sge-exec-servers
       service_description      SGE execd
       check_command            check_nrpe_1arg!check_sge_execd
       notification_interval    0 ; set > 0 if you want to be renotified
       event_handler            restart-service!sge-execd
}

Next, the restart-service command must be defined. I did that in a file that I called /etc/nagios3/conf.d/event-handlers.cfg:

define command {
       command_name     restart-service
       command_line     /etc/nagios3/conf.d/event_handler_script.sh $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$ $HOSTADDRESS$ $ARG1$ $SERVICEDESC$
}

The variable $ARG1$ here is the name of the service that needs to be restarted. In this example it is sge-execd, from the event_handler line in the service definition. The $HOSTADDRESS$ variable will be used in the event handler script to send the command to the right host via NRPE.
The event_handler_script.sh referenced here is almost identical to the one in the Nagios documentation. As mentioned in the plan above, I changed it slightly so that it uses NRPE.

#!/bin/sh                                                                                            
#
# Event handler script for restarting the nrpe server on the local machine
# Taken from the Nagios documentation and
# http://www.techadre.com/sites/techadre.com/files/event_handler_script_0.txt
# Adapted by L.C. Karssen
# Time-stamp: <2010-09-14 15:24:33 (root)>
#
# Note: This script will only restart the nrpe server if the service is
#       retried 3 times (in a "soft" state) or if the web service somehow
#       manages to fall into a "hard" error state.
#
 
date=`date`
 
# What state is the NRPE service in?
case "$1" in
OK)
        # The service just came back up, so don't do anything...
        ;;
WARNING)
        # We don't really care about warning states, since the service is probably still running...
        ;;
UNKNOWN)
        # We don't know what might be causing an unknown error, so don't do anything...
        ;;
CRITICAL)
        # Aha!  The BLAH service appears to have a problem - perhaps we should restart the server...
 
        # Is this a "soft" or a "hard" state?
        case "$2" in
 
        # We're in a "soft" state, meaning that Nagios is in the middle of retrying the
        # check before it turns into a "hard" state and contacts get notified...
        SOFT)
                # What check attempt are we on?  We don't want to restart the web server on the first
                # check, because it may just be a fluke!
                case "$3" in
 
                # Wait until the check has been tried 3 times before restarting the web server.
                # If the check fails on the 4th time (after we restart the web server), the state
                # type will turn to "hard" and contacts will be notified of the problem.
                # Hopefully this will restart the web server successfully, so the 4th check will
                # result in a "soft" recovery.  If that happens no one gets notified because we
                # fixed the problem!
                3)
                        echo -n "Restarting service $6 (3rd soft critical state)...\n"
                        # Call NRPE to restart the service on the remote machine
                        /usr/lib/nagios/plugins/check_nrpe -H $4 -c restart-$5
                        echo "$date - restart $6 - SOFT"  >> /tmp/eventhandlers
                        ;;
                        esac
                ;;
 
        # The service somehow managed to turn into a hard error without getting fixed.
        # It should have been restarted by the code above, but for some reason it didn't.
        # Let's give it one last try, shall we?
        # Note: Contacts have already been notified of a problem with the service at this
        # point (unless you disabled notifications for this service)
        HARD)
                case "$3" in
 
                4)
                        echo -n "Restarting $6 service...\n"
                        # Call the init script to restart the NRPE server
                        echo "$date - restart $6 - HARD"  >> /tmp/eventhandlers
                        /usr/lib/nagios/plugins/check_nrpe -H $4 -c restart-$5
                        ;;
                        esac
                ;;
        esac
        ;;
esac
exit 0

Now Nagios can be restarted and should continue its work as usual. Time to make the changes on the remote hosts.

First, we’ll grant the necessary sudo rights to the nagios user. Run visudo and add these lines:

## Allow NRPE to restart sevices
User_Alias NAGIOS = nagios,nagcmd
Cmnd_Alias NAGIOSCOMMANDS = /usr/sbin/service
Defaults:NAGIOS !requiretty
NAGIOS    ALL=(ALL)    NOPASSWD: NAGIOSCOMMANDS

And finally add the required lines to the NRPE config file (/etc/nagios/nrpe.cfg):

command[restart-sge-execd]=/usr/bin/sudo /usr/sbin/service gridengine-exec start

Restart the NRPE daemon and it should all work. Test it by manually stopping the service.
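
The restart command can also be exercised by hand from the Nagios host, which is a quick way to verify the sudo and NRPE plumbing in isolation (server1.example.com is a placeholder for one of your servers):

/usr/lib/nagios/plugins/check_nrpe -H server1.example.com -c restart-sge-execd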

[1] Nagios documentation on Event Handlers
[2] Two blog posts that describe a similar setup. I used these as a starting point for my own setup.


Lenovo Thinkpad X100e and Ubuntu 10.04

About a month ago I bought a Lenovo Thinkpad X100e laptop. Well, maybe laptop is a bit too big a word for it. Size-wise it’s more like a netbook with its screen diagonal of 11.6″. Performance-wise however, it’s much better. The one I’ve got has an AMD Turion Neo X2 L625 dual core processor running at a maximum of 1.6GHz and 2GB of RAM. It’s a nifty little machine that serves my needs: doing some work on the train to and from work, or while being on conferences.

I took quite some time to look around for a laptop like this, and this Thinkpad seems to be the only one that satisfies my minimum requirements:
– Matte screen; no glossy screens for me, I’ve already got a mirror in my bathroom :-).
– Trackpoint; yep, that’s the red dot in between the G, H, and B keys.
– A processor more powerful than Intel’s Atom.
– A decent keyboard, because for me, using Linux means using the command line and Emacs a lot.

After several weeks of use I’ve found only one drawback to this machine: its processor is not that efficient. It uses quite some power and therefore gets a bit hot. As a result the fan runs a lot (even though it’s not that audible) and battery life is not too good. I’m getting approximately 2 to 3 hours out of it if I reduce the screen brightness and turn wifi off. That could have been better (maybe Lenovo should have used an Intel CULV processor?), but it’s not too much of a limitation. This came as no surprise, though; most reviews on the web mention it.

After opening the box I quickly made an image of the Windows partitions that were on it and then proceeded to install Ubuntu 10.04. Most of the hardware was recognised by the 2.6.32 kernel included with Ubuntu’s 10.04 release. However, as several blogs (see links below) pointed out, there are a few bumps, e.g. with suspend and resume, or the wireless chip that is able to connect but doesn’t want to send or receive data. The bumps were smoothed out by installing a newer kernel (2.6.35-12-generic) from the Ubuntu kernel PPA. The 2.6.35 kernel is the one that will be used in the next Ubuntu release, and the PPA contains packages that make this kernel run in the present release as well. With that kernel suspend and hibernate work well, as do most Fn function keys. In fact, the only one that doesn’t seem to work is Fn+F3 for microphone mute. I had to turn on the bluetooth module in Windows before it showed up in Ubuntu (as noted by several blogs). At the moment, the things that don’t work correctly are:
– The microphone doesn’t record (neither in the sound recorder, nor when using Skype). Sometimes it shows some activity if the mic-volume slider is moved to about 25%, but I couldn’t get that to work reliably.
– The combined mic/headphone jack doesn’t mute the speakers if a pair of headphones is plugged in (nor is any sound heard through the headphones).
Maybe a newer ALSA release in the upcoming Ubuntu 10.10 will remedy these problems.

I was pleasantly surprised by the fact that using the open source radeon driver (installed by default) for the AMD/ATI graphics card worked out of the box, including Compiz 3D desktop fancy stuff. The VGA out also worked perfectly when I hooked it up to my Sony Bravia TV. Xorg’s RandR detected it and I could choose between an extended desktop or a clone setup.

As I already mentioned, I’m a trackpoint user, so I wanted to disable the touchpad, especially since the two buttons for it are located at the front edge of the laptop and are easily pressed when the device sits on your lap and you’ve got your knees pulled up.
Secondly, I enabled wheel emulation for the trackpoint. Now, if I click and hold the middle ‘mouse’ button and push the trackpoint in a certain direction, it acts as a scroll wheel. To achieve this I created the file /usr/lib/X11/xorg.conf.d/20-thinkpad.conf (EDIT: for Ubuntu 10.10 this file should be located in /usr/share/X11/xorg.conf.d/) with the following contents:

Section "InputClass"
	Identifier "Trackpoint Wheel Emulation"
	MatchProduct "Trackpoint"
	MatchDevicePath "/dev/input/dev*"
	Driver "evdev"
	Option "EmulateWheel" "true"
	Option "EmulateWheelButton" "2"
	Option "Emulate3Buttons" "3"
	Option "XAxisMapping" "6 7"
	Option "YAxisMapping" "4 5"
EndSection	
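
After restarting X, the result can be checked with xinput; list the devices first, because the exact device name may differ from the one quoted below (TrackPoints usually register as "TPPS/2 IBM TrackPoint"):

xinput list
xinput list-props "TPPS/2 IBM TrackPoint"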

All in all I’m very happy with the X100e. It’s a small but sturdy laptop with an excellent screen and a great keyboard.

Some links:
An excellent review of the Lenovo Thinkpad X100e
A recent review at AnandTech
Ubuntu kernel PPA
ThinkWiki page for the X100e, has lots of info on running Linux on this laptop.
A blog about installing Ubuntu Linux on the X100e, the problems mentioned in that post and its comments have now been solved (if you install the 2.6.35 kernel from the PPA). I tried the gpointing-device-settings package for some time (to get the trackpoint scroll functionality to work), but its settings didn’t survive across reboots or even after hibernating, so I removed it again.


Linux, the Logitech Trackman Marble and emulating a scroll wheel

At work I recently came across a trackball. It was about to be thrown away and since I’d never really used one I decided to take it home and try it out. It’s a Logitech Trackman Marble, still for sale on Logitech’s website.

The trackball features four buttons: two large ones for left- and right-clicking and two smaller ones that work as back and forward buttons in Firefox, for example.

After plugging it into my PC it was instantly recognised by X (I’m using Ubuntu 10.04). There’s no middle mouse button, but that can be emulated by clicking the left and right mouse buttons at the same time (something I’ve been used to on older laptops and, well, even from the time that some of the mice I owned only had two buttons). However, I did miss my scroll wheel. A quick search on the Internet brought me to Rob Meerman’s website, where he explains a lot about the Trackman and how it works in X. He even has a special section on Ubuntu 10.04. In short it comes down to these commands:

xinput set-int-prop "Logitech USB Trackball" "Evdev Wheel Emulation Button" 8 8
xinput set-int-prop "Logitech USB Trackball" "Evdev Wheel Emulation" 8 1

Unfortunately the changes made by these commands are not persistent across reboots. I’ll try to fix that later.
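
A likely fix (untested at the time of writing) would be the same approach as in my X100e post: an InputClass snippet in xorg.conf.d that applies the evdev options every time X starts. A sketch:

Section "InputClass"
	Identifier "Marble Mouse scroll emulation"
	MatchProduct "Logitech USB Trackball"
	Driver "evdev"
	Option "EmulateWheel" "true"
	Option "EmulateWheelButton" "8"
	Option "XAxisMapping" "6 7"
	Option "YAxisMapping" "4 5"
EndSection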

EDIT: To add middle mouse button emulation and horizontal scrolling (thanks to rejistania below) run:

xinput set-int-prop "Logitech USB Trackball" "Evdev Middle Button Emulation" 8 1
xinput set-prop "Logitech USB Trackball" "Evdev Wheel Emulation Axes" 6 7 4 5

END EDIT

Regarding the use of a trackball compared to an ordinary mouse, my experiences so far have been very positive. It didn’t take me a lot of time to get used to it, and precision placement of the pointer doesn’t seem to be more difficult than with a regular mouse. So for now my wireless Logitech mouse can take a holiday :-). The nicest thing about the trackball is the fact that you don’t have to move the whole device, so it’s less ‘weight lifting’. Also, the fact that the ball (in combination with the small button) is the scroll wheel makes for a relatively heavy wheel without much friction, so scrolling large distances can simply be done by giving the ball a good spin. Nice!


Script that converts a Squirrelmail address book to vcf format

Today I installed Roundcube as a replacement for my Squirrelmail webmail setup. All went well, but Roundcube only accepts addresses in vCard (.vcf) format, whereas Squirrelmail only exports in CSV format. To solve this I wrote the following script that converts a Squirrelmail address book to vCard format. The results are sent to stdout, so run it like this: abook2vcf.awk user@example.com.abook > my_addresses.vcf.
On my Debian install the Squirrelmail address books (files ending in .abook) are located in /var/lib/squirrelmail/data.
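
For reference, the .abook format is plain text with pipe-separated fields; a line looks roughly like this (the script below reads the fields as full name, first name, last name and e-mail address):

John Doe|John|Doe|john@example.com|Some remark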

#!/usr/bin/awk -f
#
# This script converts a Squirrelmail address book to 
# vcards for import in Roundcube for example.
BEGIN{
    FS="|"
}
 
{
    full_name  = $1;
    first_name = $2;
    last_name  = $3;
    email      = $4;
 
    print("BEGIN:VCARD");
    print("VERSION:3.0");
    printf("FN:%s\n", full_name);
    printf("N:%s;%s;;;\n", last_name, first_name);
    printf("EMAIL;type=INTERNET;type=HOME;type=pref:%s\n", email);
    print("END:VCARD");
}


Using Windows AD for Apache authentication

Recently I was setting up a Subversion repository (on a Linux server) that needs to be accessed via HTTP. Users should be able to connect to the repositories without authentication, but authentication is needed to perform write actions. Of course Apache’s htpasswd/htaccess combination would provide just that, but since we have a Windows 2008 Active Directory domain controller that provides authentication to our Windows machines, I thought it would be a good idea to use it.

Configuration of the authentication and authorization is done by Apache’s mod_authnz_ldap and (on Red Hat EL) configured in /etc/httpd/conf.d/subversion.conf (which exists after installing the subversion package with yum).

Basic configuration with htaccess
For simple authentication with Apache’s htaccess mechanism the config looks like this:

LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so
 
<Location /repos>
   DAV svn
   SVNParentPath /var/www/svn
   SVNReposName "My company's repository"
 
   # Limit write permission to list of valid users.
   <LimitExcept GET PROPFIND OPTIONS REPORT>
      AuthType Basic
      AuthName "Authorization Realm for SVN"
      AuthUserFile /etc/httpd/conf.d/svn_htpasswd
      Require valid-user
 
   </LimitExcept>
</Location>

After using htpasswd to create a file with usernames and passwords on the server users could commit to the repository.

Configuration for AD Global Catalog
The first LDAP-like construction I got working was the one using the AD Global Catalog. Normal LDAP traffic uses port 389, but the AD’s Global Catalog uses port 3268. The username needed to commit with SVN is windows_logon_name@your_AD.suffix, the so-called userPrincipalName. Here, your_AD and suffix are the DCs of the LDAP/AD tree. By using this userPrincipalName, users from different DC trees can be authenticated. The configuration file looks like this:

LoadModule dav_svn_module     modules/mod_dav_svn.so
LoadModule authz_svn_module   modules/mod_authz_svn.so
 
<Location /repos>
    DAV svn
    SVNParentPath /var/www/svn
    SVNReposName "My company's repository"
 
    # Limit write permission to list of valid users.
    <LimitExcept GET PROPFIND OPTIONS REPORT>
      AuthType Basic
      AuthName "Authorization using your LDAP account"
      AuthBasicProvider ldap
      AuthzLDAPAuthoritative off
      # Active Directory requires an authenticating DN to access records
      AuthLDAPBindDN "svntest@your_AD.suffix"
 
      # This is the password for the AuthLDAPBindDN user in Active Directory
      AuthLDAPBindPassword "some_good_password"
 
      # The LDAP query URL
      AuthLDAPURL "ldap://ldap.your_company.com:3268/?userPrincipalName?sub"
      AuthUserFile /dev/null
 
      # Require a valid user
      Require valid-user
    </LimitExcept>
</Location>

With this configuration I could commit with this command: svn commit -m "First AD test" --username your_windows_username@your_AD.suffix.

Configuration for AD + Windows logon Name
As mentioned earlier, the previous method allows people from different parts of the AD tree to log in. In order to restrict access to, for example, a specific OU, the AuthLDAPURL has to be changed. In our case the LDAP tree is not a simple OU=Users,DC=our_company,DC=com, but consists of several nested OU structures. I used the adsiedit.msc snap-in (ADSI editor) on the AD controller to find out the exact structure, since I needed to know which parts were CNs and which were OUs.
In order to authenticate against the Windows logon names in a certain sub-OU, the AuthLDAPURL is

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?sAMAccountName?sub?(objectClass=*)"

Configuration for AD + Windows Display Name
If you want the users to use their common name (the Display Name in the AD) use:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?cn"

Users can now commit with: svn commit -m "Another AD test" --username "Firstname Lastname".

Configuration for AD + another field
In our case login authentication on the Linux/UNIX machines is not done through the AD. Furthermore, the user names are not synchronised between Linux and Windows. This poses a small inconvenience, since by default an svn commit uses the Linux username. As the AD doesn’t know about this name, the first authentication fails. Subsequently Apache asks for the user name, and then the user can enter his Windows AD credentials (principal name, display name or Windows logon name, depending on which of the above configurations was chosen). So as a quick workaround (and just to see if I could get it to work) I entered my Linux user name into the Office field in the AD. In the ADSI Editor I found the real name of the field: physicalDeliveryOfficeName. With the following AuthLDAPURL I could use the Office field to authenticate me:

AuthLDAPURL "ldap://ldap.your_company.com:389/OU=Group 1, OU=Location 1, DC=your_AD, DC=suffix?physicalDeliveryOfficeName"

Now a simple svn commit works.


Script to tunnel RDP connections through stepping stone server using SSH

In order to connect to the servers at work we need to connect through a stepping stone host (via SSH). Since some of the servers are MS Windows machines which can be managed via the Remote Desktop Protocol (RDP), this traffic needs to be tunneled over SSH as well.
I wrote the following bash script to automate setting up the tunnel. It sets some default variables and then looks for an available port between 1234 and 1254 (chosen completely arbitrarily) and uses it for the tunnel. Then it uses the rdesktop program to start the RDP connection.

#!/bin/bash
#
# This script makes an ssh tunnel to a stepping stone server
# and uses it to start an rdesktop connection to the machine 
# given as the first argument of the script. 
#
# (C) L.C. Karssen
# $Id: winremote.sh,v 1.14 2010/02/10 13:03:08 lennart Exp $
#
 
# User-configurable variables
ssh_username=your_steppingstone_username_here
steppingstone=steppingstone.your_company.com
rdesktop_username=your_windows_username_here
rdesktop_domain=your_windows_domain_here
rdesktop_opts="-z -g 1024x768 -a 16"
rdesktop_port=3389 # This is the standard MS rdesktop port
 
 
# For ordinary users it should not be necessary to change anything below this line. 
# Some functions:
usage()
{
    cat <<EOF
Usage:
    $program [options] rdesktop_hostname 
 
Make a remote desktop connection through an SSH tunnel.
 
Options: 
    -h, --help                                   print this help message
    -s, --steppingstone steppingstone_hostname   set other stepping stone host
                                                   (default: $steppingstone)
    -t, --timeout sec                            set timeout (default: 1)
    -v, --verbose                                verbose output
     --version                                   print version
 
Note that some customisations need to be made in the first few lines of this 
script (e.g. user names and other defaults)
EOF
}
 
program=`basename $0`
 
# Command line option parsing. Shift all options 
verbose=
timeout=1
 
while [ $# -gt 0 ]
do 
    case $1 in
	-v | --verbose | -d | --debug ) 
	    verbose=true
	    ;;
	--version )
	    echo '$Revision: 1.14 $'
	    exit 0
	    ;;
	-t | --timeout ) 
	    shift
	    timeout="$1"
	   if [ $timeout -lt 1 ]; then
	       timeout=1
	   fi
	   if [ $verbose ]; then
	       echo "Timeout set to $timeout"
	   fi
	   ;;
	-s | --steppingstone ) 
	   shift
	   steppingstone="$1"
	   if [ $verbose ]; then
	       echo "Steppingstone server is $steppingstone"
	   fi
	   ;;
	-h | --help ) 
	   usage
	   exit 0
	   ;;
	-*) 
	   echo "$0: invalid option $1" >&2
 	   usage
	   exit 1
	   ;;
	*) 
	   break
	   ;;
    esac
    shift
done
 
# Server name (as seen on the steppingstone) that we want to connect to:
rdesktop_server=$1 
 
################### Config done, let's get to work ########################
 
# Simple usage description
if [ "$rdesktop_server" == "" ]; then
    echo "Error: No rdesktop host given" >&2
    usage
    exit 1
fi
 
# Find a free port on the local machine that we can use to connect through
min_port=1234
max_port=1254
used_ports=(`netstat -tan | awk '{print $4}' | grep 127.0.0.1 | awk -F: '{print $2}' | sort -g`)
if [ $verbose ]; then
    echo "Used ports are: ${used_ports[@]}"
fi
 
# In the next line we first print the $used_ports as an array, but with 
# each port on a single line. This is then piped to an awk script that 
# puts all the values in an array and subsequently walks through all ports 
# from $min_port to $max_port in order to find the first port that is not 
# in the array. This port is printed.
local_port=`printf "%i\n" ${used_ports[@]} | \
    awk -v minp=$min_port -v maxp=$max_port \
    '{ array[$1]=1 } END { for (i=minp; i<=maxp; i++) { if (i in array) continue; else { print i; break } } }'`
if [ "$local_port" == "" ]; then
    echo "Error: No ports free! Exiting..." >&2
    exit 2
fi
if [ $verbose ]; then
    echo "Selected port was: $local_port"
fi
 
# Create tunnel:
if [ $verbose ]; then
    echo "Creating SSH tunnel..."
fi
ssh_opts="-f -N -L"
ssh $ssh_opts $local_port:$rdesktop_server:$rdesktop_port \
    $ssh_username@$steppingstone 
 
# Allow the ssh tunnel to be established
sleep $timeout
 
# Abort if tunnel has not been established
pidof_ssh=`pgrep -f "ssh $ssh_opts $local_port"`
if [ "$pidof_ssh" == "" ]; then
    echo "Error: Timeout while establishing tunnel" >&2
    echo "Exiting..."
    exit 3
fi
 
# Make rdesktop connection
if [ $verbose ]; then
    echo "Opening Remote desktop connection to $rdesktop_server..."
fi
rdesktop $rdesktop_opts -u $rdesktop_username -p - \
    -d $rdesktop_domain localhost:$local_port
 
# Clean up tunnel
if [ $verbose ]; then
    echo "Cleaning up SSH tunnel with pid $pidof_ssh and local port $local_port"
fi
kill $pidof_ssh
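
A typical invocation, assuming the script is saved as winremote.sh (as the $Id$ line suggests) and winserver01 is a Windows host as known on the stepping stone, looks like this:

$ winremote.sh -v winserver01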

