Notes about open source software, computers, other stuff.

Bulk downloading and renaming of Expensify PDF reports

In my company, we have been using Expensify to manage small receipts, travel expenses, etc. Recently, however, I decided to switch to another platform that is part of the SaaS suite our accountant uses. Even though it lacks some of the functionality provided by Expensify, having all receipts in a single location reduces the amount of time I spend on administrative tasks.

Every quarter, Dutch companies have to file a VAT report. For that, I exported the Expensify reports to CSV files (to send to my accountant) and to PDF, as a more “visual” backup that lists the reported expenses sorted by category and, importantly, also includes the scans of the various receipts.

As we changed accountants a couple of years ago, I wasn’t sure whether I had actually downloaded both the CSV and the PDF file for each Expensify report. Keeping records is required by Dutch law, so I decided to make sure and download all PDF files and back them up somewhere.

Unfortunately, the Expensify website doesn’t offer an option for bulk downloading the PDF files. They do offer a kind of REST API (they call it the Integration Server), which I had played with years ago, so I decided to try that. Luckily, the credentials I had saved in my password manager still worked.

The process for downloading the PDFs consists of two steps:

  • Run a command to generate the reports; this returns the file names of the PDF files.
  • Use those names to download the PDFs.

The first step took a couple of minutes to run and then listed the filenames of the PDFs on stdout:

curl -X POST 'https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations' \
    -d 'requestJobDescription={
        "type":"file",
        "credentials":{
            "partnerUserID":"XXXXXXXXXX",
            "partnerUserSecret":"YYYYYYYYYY"
        },
        "onReceive":{
            "immediateResponse":["returnRandomFileName"]
        },
        "inputSettings":{
            "type":"combinedReportData",
            "filters":{
                "startDate":"2013-01-01"
            }
        },
        "outputSettings":{
            "fileExtension":"pdf",
            "includeFullPageReceiptsPdf":"true"
        }
    }' \
    --data-urlencode 'template@expensify_template.ftl'

I’m not sure what the expensify_template.ftl file does in this command, but it was necessary to create that file locally, otherwise the curl call would return an error. I simply copied the sample provided in the documentation for the Expensify Integration Server. I saved the long list of PDF filenames output by the above command. A typical filename looks like this: exportc992bd79-aa4a-4b04-a76a-1149194bac94-34589514.pdf. Not very descriptive… As expected (and confirmed in the web UI), there were 191 file names.
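
To turn that output into the list used in step two, something like this works — a sketch, where raw_output.txt is a name I made up for the saved response and the file name pattern is inferred from the example above:

# Extract the PDF file names from the saved curl output
grep -o 'export[A-Za-z0-9-]*\.pdf' raw_output.txt > pdflist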

Next, step two: actually downloading the files. The basic call for that is:

curl -X POST 'https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations' \
    -d 'requestJobDescription={
        "type":"download",
        "credentials":{
            "partnerUserID":"XXXXXXXXXX",
            "partnerUserSecret":"YYYYYYYYYY"
        },
        "fileName":"exportc992bd79-aa4a-4b04-a76a-1149194bac94-5803035.pdf",
        "fileSystem":"integrationServer"}
    }' \
    --data-urlencode 'template@expensify_template.ftl' --output "my_output.pdf"

So, in order to download all PDFs, I saved all file names in a file called pdflist and checked that they are all unique:

$ wc -l pdflist
191 pdflist
$ sort pdflist | uniq | wc -l
191

Next, I used a loop to read each line from the pdflist file, fiddling a bit with the quotes so I could use the pdf variable in the curl call and download each file:

cat pdflist | while read pdf; do
curl -X POST 'https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations' \
    -d "requestJobDescription={
        'type':'download',
        'credentials':{
            'partnerUserID':'XXXXXXXXXX',
            'partnerUserSecret':'YYYYYYYYYY'
        },
        'fileName':${pdf},
        'fileSystem':'integrationServer'
    }" \
    --data-urlencode 'template@expensify_template.ftl' --output ${pdf}
done

This indeed gave me 191 Expensify report PDFs, with very uninformative names 😐. To fix that I resorted to some more shell “scripting”. Every report has a title (usually something like “Small expenses 2020 Q4”) and, judging from the output of the pdftotext utility, this title always appears on the third line of the extracted text. So I moved the original PDFs to a separate “archive” directory OriginalExports and ran the following to copy each PDF to a new name equal to its title. My first attempt partly failed: the number of renamed PDF files was smaller than the number of original PDFs. I guessed this would happen when two reports have the same title, and indeed, adding -i to the cp command to warn me about overwrites showed I was right. As this only happened for four files, I renamed those manually.

for pdf in OriginalExports/export*.pdf; do
echo ${pdf}
title=$(pdftotext ${pdf} - | head -3 | tail -1 | tr "/" "_")
cp -i ${pdf} "${title}.pdf"
done
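
For reference, a variant that appends a counter instead of overwriting when two reports share a title — a sketch I didn’t actually run, shown only as an alternative to the manual fix:

for pdf in OriginalExports/export*.pdf; do
    title=$(pdftotext "${pdf}" - | head -3 | tail -1 | tr "/" "_")
    target="${title}.pdf"
    n=2
    # If a PDF with this title already exists, append " (2)", " (3)", ...
    while [ -e "${target}" ]; do
        target="${title} (${n}).pdf"
        n=$((n + 1))
    done
    cp "${pdf}" "${target}"
done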

So there I had my backup of all receipts since we started using Expensify. And if the tax office or the accountant ever wants to see those receipts, I am now sure I can provide them.

Use a script to convert Office files to PDF via Nautilus’ right-click menu

Recently, I bought a reMarkable 2, a tablet-like device with an e-ink screen that allows me to replace real paper with digital note taking, while keeping the hand-written aspect of writing notes. The device also allows me to read and annotate PDFs in a comfortable way. Given that I use RMfuse to ‘mount’ the reMarkable cloud on my computer, I normally drag-n-drop the files I want to read in GNOME Files (formerly Nautilus, GNOME’s file manager) from their location on my computer to the mounted cloud.

This works really well so far. However, I also regularly receive files in the MS Office .docx format. Often I need to make substantial changes in these documents, which I do on my laptop or desktop computer. But sometimes I only need to read them or just put my signature at the bottom. In those cases I would open the .docx file in LibreOffice, convert it to PDF and copy it to my reMarkable. To speed this up, I thought it would be nice to be able to right-click a .docx file in GNOME Files/Nautilus and have it converted to PDF automatically, after which I can drag-n-drop the PDF file to the mounted reMarkable cloud.

So the question was: how can I add an item to the right-click menu of Nautilus that runs a script when I click it? After looking around on the internet, this turned out to be quite easy: scripts placed in ~/.local/share/nautilus/scripts/ end up in the Nautilus right-click menu under the Scripts submenu.
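
For completeness, installing such a script comes down to the following (the script file name here is my choice, not part of the original setup):

# Put the script in Nautilus' scripts directory and make it executable
mkdir -p ~/.local/share/nautilus/scripts
cp office2pdf.sh ~/.local/share/nautilus/scripts/
chmod +x ~/.local/share/nautilus/scripts/office2pdf.sh
# Quit Nautilus so it reloads the Scripts submenu
nautilus -q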

To do the actual conversion to PDF, I created the following script:

#!/bin/bash
# This script converts the selected file to PDF using LibreOffice
# For general instructions on how to use Nautilus scripts, see
# https://help.ubuntu.com/community/NautilusScriptsHowto
#
# Save this script in ~/.local/share/nautilus/scripts/ and make it
# executable.

IFS_BAK=$IFS
IFS="
"

for SelectedFile in ${NAUTILUS_SCRIPT_SELECTED_FILE_PATHS}; do
    soffice \
        --nodefault \
        --nolockcheck \
        --nologo \
        --norestore \
        --nofirststartwizard \
        --convert-to pdf "${SelectedFile}"
done

IFS=$IFS_BAK

Because I wanted to be able to select multiple files, some having spaces in their names, I had to make sure the space character wasn’t used to split the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS variable. That is why I temporarily replace the IFS variable with a newline.
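
A quick way to see the effect of the IFS change, with made-up example paths:

#!/bin/bash
# Made-up example paths; the real script gets them from
# NAUTILUS_SCRIPT_SELECTED_FILE_PATHS ($'\n' is equivalent to the literal
# newline assignment used above).
paths=$'/tmp/My Report.docx\n/tmp/Notes.docx'

# With the default IFS, the space inside "My Report.docx" also splits:
for f in $paths; do echo "got: $f"; done   # prints three items

# With IFS set to a newline only, each path stays intact:
IFS=$'\n'
for f in $paths; do echo "got: $f"; done   # prints two items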

I have only tried this on .docx files so far, but I guess it would work on presentations and spreadsheets as well.

Moving annual backups from an external disk with Ext4 to an external disk with ZFS

For a few years I have used the Christmas holidays to create a full
backup of my /home on an external hard disk. For that I used a
Bash script around rsync that uses hard links to keep the used disk
space under control. Each backup was saved in a directory named with
the date of the backup. POSIX ACLs were also backed up.

Since last year’s backup I have moved to ZFS (using ZFS on Linux
with Ubuntu 14.04) as the filesystem for /home (and others). Because
ZFS checksums both data and metadata, it can detect corrupted files
(and, if the data is redundant, also fix them). This is a feature
I’d like to have for my backups as well: I’d rather know when
corruption occurs than live in ignorance.

So the plan is to move the old backups from the external disk to
the ZFS pool in my server. Instead of using hard links, I’ll
transfer the backups in order from old to new to the ZFS pool,
making a snapshot for each. I will also turn on compression (using
the lz4 algorithm). Once that is done, I will reformat the external
drive and create a ZFS pool called “JaarlijkseBackupPool” on it
(jaarlijks means annual in Dutch).

The old situation

In the current/old situation, this is how much disk space is used
on the external disk (with and without taking the hard links into
account):

$ sudo du -csh /mnt/JaarlijkseBackups/*
102G    /mnt/JaarlijkseBackups/2010-11-28
121G    /mnt/JaarlijkseBackups/2013-02-04
101G    /mnt/JaarlijkseBackups/2013-12-23
324G    total
$ sudo du -clsh /mnt/JaarlijkseBackups/*
102G    /mnt/JaarlijkseBackups/2010-11-28
193G    /mnt/JaarlijkseBackups/2013-02-04
255G    /mnt/JaarlijkseBackups/2013-12-23
549G    total

Copying the data from the Ext4 disk to a temporary ZFS filesystem on my server

The ZFS pool in my server is called storage. In order to preserve
the POSIX ACLs of the Ext4 system, they need to be enabled when
creating the ZFS filesystem as well. Setting xattr=sa means the
ACLs are stored more efficiently (although this option is not
compatible with other ZFS implementations at this time, so if I
were to import the ZFS pool in FreeBSD, for example, that
information would be inaccessible).

$ zfs create storage/JaarlijkseBackupsOrganized \
      -o compression=lz4 \
      -o acltype=posixacl \
      -o xattr=sa
$ sudo rsync -ahPAXHS --numeric-ids \
     /storage/JaarlijkseBackups/2010-11-28/ \
     /storage/JaarlijkseBackupsOrganized
$ zfs snapshot storage/JaarlijkseBackupsOrganized@2010-11-28

This was followed by the same rsync and zfs snapshot commands for
the other two dates.
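
A sketch of those repeated steps, assuming the source directories
follow the same pattern as the first transfer above:

$ sudo rsync -ahPAXHS --numeric-ids \
     /storage/JaarlijkseBackups/2013-02-04/ \
     /storage/JaarlijkseBackupsOrganized
$ zfs snapshot storage/JaarlijkseBackupsOrganized@2013-02-04
$ sudo rsync -ahPAXHS --numeric-ids \
     /storage/JaarlijkseBackups/2013-12-23/ \
     /storage/JaarlijkseBackupsOrganized
$ zfs snapshot storage/JaarlijkseBackupsOrganized@2013-12-23
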
Once that is finished, this is the status of that ZFS FS:

$ zfs list -r -t all storage/JaarlijkseBackupsOrganized
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
storage/JaarlijkseBackupsOrganized              275G   438G   272G  /storage/JaarlijkseBackupsOrganized
storage/JaarlijkseBackupsOrganized@2010-11-28  1,03G      -  88,9G  -
storage/JaarlijkseBackupsOrganized@2013-02-04  2,33G      -   196G  -
storage/JaarlijkseBackupsOrganized@2013-12-23      0      -   272G  -
$ zfs get -r -t all compressratio storage/JaarlijkseBackupsOrganized
NAME                                           PROPERTY       VALUE  SOURCE
storage/JaarlijkseBackupsOrganized             compressratio  1.13x  -
storage/JaarlijkseBackupsOrganized@2010-11-28  compressratio  1.19x  -
storage/JaarlijkseBackupsOrganized@2013-02-04  compressratio  1.14x  -
storage/JaarlijkseBackupsOrganized@2013-12-23  compressratio  1.12x  -

Partitioning the external disk

The external disk is a 1TB Samsung SATA 3Gbps SpinPoint F2 EcoGreen
disk (type HD103SI, serial number: S1VSJD6ZB02657). The disk uses
512-byte sectors:

$ sudo hdparm -I /dev/sdf | grep Sector
     Logical/Physical Sector size:           512 bytes

Before using it with ZFS, it needs to be partitioned. I used
parted:

$ parted /dev/sdf
GNU Parted 2.3
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdf: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1000GB  1000GB  primary  ext4

(parted) mklabel
New disk label type? gpt
(parted) u
Unit?  [compact]? MB
(parted) p
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdf: 1000205MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart
Partition name?  []? JaarlijkseBackups-HD103SI-S1VSJD6ZB02657
File system type?  [ext2]? zfs
Start? 1M
End? 1000204M
(parted) p
Model: ATA SAMSUNG HD103SI (scsi)
Disk /dev/sdf: 1000205MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End        Size       File system  Name                                  Flags
 1      1,05MB  1000204MB  1000203MB  ext4         JaarlijkseBackups-HD103SI-S1VSJD6ZB0

(parted) q

This removes the old partition table and creates a new GPT
partition table (which allows naming partitions). Next I set the
units to MB so I can leave 1MB free at the beginning and end of the
partition (this can be helpful when importing the pool in
e.g. FreeBSD). The disk also shows up in /dev/disk/by-partlabel
now.

Creating the new ZFS pool

$ zpool create -o ashift=9 JaarlijkseBackupPool \
    /dev/disk/by-partlabel/JaarlijkseBackups-HD103SI-S1VSJD6ZB0
$ zpool status JaarlijkseBackupPool
  pool: JaarlijkseBackupPool
 state: ONLINE
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        JaarlijkseBackupPool                    ONLINE       0     0     0
          JaarlijkseBackups-HD103SI-S1VSJD6ZB0  ONLINE       0     0     0

errors: No known data errors

Migrating the data

Now that the new ZFS pool and filesystem are all in place, it is
time to move the backups to their new place, starting with the
oldest backup. The -R option also makes sure that attributes like
compression and xattr are transferred to the new FS. The
following commands send each snapshot to the new pool (the -n
option of zfs receive is for doing a dry run, just to show how it
works). After the first snapshot is sent, the other two are sent
using the -i option of zfs send so that only the incremental
differences between the snapshots are sent.

$ zfs send -vR storage/JaarlijkseBackupsOrganized@2010-11-28 | \
      zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups
$ zfs send -vR -i storage/JaarlijkseBackupsOrganized@2010-11-28 \
    storage/JaarlijkseBackupsOrganized@2013-02-04 | \
    zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups
$ zfs send -vR -i storage/JaarlijkseBackupsOrganized@2013-02-04 \
      storage/JaarlijkseBackupsOrganized@2013-12-23 | \
      zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups -n
send from @2013-02-04 to storage/JaarlijkseBackupsOrganized@2013-12-23 estimated size is 84,3G
total estimated size is 84,3G
TIME        SENT   SNAPSHOT
would receive incremental stream of storage/JaarlijkseBackupsOrganized@2013-12-23 into JaarlijkseBackupPool@2013-12-23
14:09:16   4,22M   storage/JaarlijkseBackupsOrganized@2013-12-23
14:09:17   8,46M   storage/JaarlijkseBackupsOrganized@2013-12-23
14:09:18   18,4M   storage/JaarlijkseBackupsOrganized@2013-12-23
14:09:19   24,8M   storage/JaarlijkseBackupsOrganized@2013-12-23
^C
$ zfs send -vR -i  storage/JaarlijkseBackupsOrganized@2013-02-04 \
      storage/JaarlijkseBackupsOrganized@2013-12-23 | \
      zfs receive -Fvu JaarlijkseBackupPool/oldRsyncBackups

Add this year’s backup

At first I tried to add the new backups to the oldRsyncBackups FS
as well, but that didn’t work (at least not as an incremental
backup), so I ended up making a new, separate backup. The extra
cost in disk space is not a real problem: disk space is rather
cheap and the current configuration will last me at least one more
year. So, after creating a snapshot called 2014-12-26 of my
/home, I ran:

$ zfs send -v storage/home@2014-12-26 | \
      zfs receive -Fu JaarlijkseBackupPool/home
$ zfs list -r -t all JaarlijkseBackupPool
NAME                                              USED  AVAIL  REFER  MOUNTPOINT
JaarlijkseBackupPool                              581G   332G    30K  /JaarlijkseBackupPool
JaarlijkseBackupPool/home                         311G   332G   311G  /JaarlijkseBackupPool/home
JaarlijkseBackupPool/home@2014-12-26             51,2M      -   311G  -
JaarlijkseBackupPool/oldRsyncBackups              271G   332G   267G  /JaarlijkseBackupPool/oldRsyncBackups
JaarlijkseBackupPool/oldRsyncBackups@2010-11-28   974M      -  87,1G  -
JaarlijkseBackupPool/oldRsyncBackups@2013-02-04  2,23G      -   193G  -
JaarlijkseBackupPool/oldRsyncBackups@2013-12-23      0      -   267G  -
$ zfs get -r compressratio JaarlijkseBackupPool
NAME                                             PROPERTY       VALUE  SOURCE
JaarlijkseBackupPool                             compressratio  1.15x  -
JaarlijkseBackupPool/home                        compressratio  1.17x  -
JaarlijkseBackupPool/home@2014-12-26             compressratio  1.17x  -
JaarlijkseBackupPool/oldRsyncBackups             compressratio  1.13x  -
JaarlijkseBackupPool/oldRsyncBackups@2010-11-28  compressratio  1.19x  -
JaarlijkseBackupPool/oldRsyncBackups@2013-02-04  compressratio  1.14x  -
JaarlijkseBackupPool/oldRsyncBackups@2013-12-23  compressratio  1.12x  -

Finishing up

In order to be able to disconnect the external drive without
damaging the filesystems, use:
zpool export JaarlijkseBackupPool

Later, the drive/pool can be imported using the zpool import
command.
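
For example (if the pool is not found automatically, zpool import’s
-d option can be used to point it at /dev/disk/by-partlabel):

zpool import JaarlijkseBackupPool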

Now that the migration is done, the intermediate filesystem
(including the snapshots) can also be removed:

zfs destroy -r storage/JaarlijkseBackupsOrganized

For reference: the old rsync script

#!/bin/sh
#
# Time-stamp: <2013-02-04 16:48:31 (root)>
# This script helps me create my annual backups to an external hard
# disk. The script uses rsync's hard link option to make hard links to
# the previous backups for files that haven't changed. It makes the
# backup based on an LVM snapshot it creates of the LV that contains
# the /home partition.
# This script needs to be run as root.
 
today=`date +%F`
olddate="2013-02-04"
 
srcdir="/mnt/backupsrc/"
destdir="/mnt/backupdest/JaarlijkseBackups/$today"
prevdir="/mnt/backupdest/JaarlijkseBackups/$olddate"
 
# LVM options
VG=raid5vg
LV=home
 
# rsync options
options="-ahPAXHS --numeric-ids"
exclusions="--exclude 'lost+found/'"
#  --exclude '*/.thumbnails'"
# exclusions="$exclusions --exclude '*/.gvfs/'"
# exclusions="$exclusions --exclude '*/.cache/' --exclude '**/Cache'"
# exclusions="$exclusions --exclude '*/.recycle/'"
 
# Check to see if the previous backup directory exists
if [ ! -d $prevdir ]; then
    echo "Error: The directory with the previous back up ($prevdir) doesn't exist" 1>&2
    exit 1
fi
 
# Make a snapshot of the home LV that we can backup
lvcreate -L15G -s -n snap$LV /dev/$VG/$LV
mount /dev/$VG/snap$LV $srcdir
 
 
# Start the backup, first a dry-run, then the full one
rsynccommand="rsync $options $exclusions --link-dest=$prevdir $srcdir $destdir"
 
$rsynccommand -n
 
# Wait for user input
echo "This was a dry run. Press a key to continue with the real stuff or"
echo "hit Ctrl-c to abort."
read dummy
 
$rsynccommand

Using rsync to backup a ZFS file system to a remote Synology Diskstation

Some time ago I moved from using LVM to using ZFS on my home server. This meant I also had to change the backup script I used to make backups to a remote Synology Diskstation. Below is the updated script. I also updated it so that it now takes a single command line argument: the host name of the Diskstation to back up to (because I now have two Diskstations at different locations). If you want to run this script from cron, you should set up key-based SSH login (see also here and here).
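
Setting up key-based login comes down to something like this (the host name is an example, and on the Synology side the SSH service has to be enabled in DSM first):

# Generate a key pair (for the user that runs the backup script) and
# copy the public key to the Diskstation
ssh-keygen -t rsa
ssh-copy-id root@diskstation1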

#!/bin/bash
#
# This script makes a backup of my home dirs to a Synology DiskStation at
# another location. I use ZFS for my /home, so I make a snapshot first and
# backup from there.
#
# This script requires that the first command line argument is the
# host name of the remote backup server (the Synology NAS). It also
# assumes that the location of the backups is the same on each
# remote backup server.
#
# Time-stamp: <2014-10-27 11:35:39 (L.C. Karssen)>
# This script it licensed under the GNU GPLv3.
 
set -u
 
if [ ${#} -lt 1 ]; then
    echo -n "ERROR: Please specify a host name as first command" 1>&2
    echo " line option" 1>&2
    exit -1
fi
 
###############################
# Some settings
###############################
# Options for the remote (Synology) backup destination
DESTHOST=$1
DESTUSER=root
DESTPATH=/volume1/Backups/
DEST=${DESTUSER}@${DESTHOST}:${DESTPATH}
 
# Options for the client (the data to be backed up)
# ZFS options
ZFS_POOL=storage
ZFS_DATASET=home
ZFS_SNAPSHOT=rsync_snapshot
SNAPDIR="/home/.zfs/snapshot/$ZFS_SNAPSHOT"
 
# Backup source path. Don't forget to have trailing / otherwise
# rsync's --delete option won't work
SRC=${SNAPDIR}/
 
# rsync options
OPTIONS="--delete -azvhHSP --numeric-ids --stats"
OPTIONS="$OPTIONS --timeout=60 --delete-excluded"
OPTIONS="$OPTIONS --skip-compress=gz/jpg/mp[34]/7z/bz2/ace/avi/deb/gpg/iso/jpeg/lz/lzma/lzo/mov/ogg/png/rar/CR2/JPG/MOV"
EXCLUSIONS="--exclude lost+found --exclude .thumbnails --exclude .gvfs"
EXCLUSIONS="$EXCLUSIONS --exclude .cache --exclude Cache"
EXCLUSIONS="$EXCLUSIONS --exclude .local/share/Trash"
EXCLUSIONS="$EXCLUSIONS --exclude home/lennart/tmp/Downloads/*.iso"
EXCLUSIONS="$EXCLUSIONS --exclude home/lennart/.recycle"
EXCLUSIONS="$EXCLUSIONS --exclude _dev_dvb_adapter0_Philips_TDA10023_DVB*"
 
 
 
###############################
# The real work
###############################
 
# Create the ZFS snapshot
if [ -d $SNAPDIR ]; then
    # If the directory exists, another backup process may be running
    echo "Directory $SNAPDIR already exists! Is another backup still running?"
    exit -1
else
    # Let's make snapshots
    zfs snapshot $ZFS_POOL/$ZFS_DATASET@$ZFS_SNAPSHOT
fi
 
 
# Do the actual backup
rsync -e 'ssh' $OPTIONS $EXCLUSIONS $SRC $DEST
 
# Remove the ZFS snapshot
if [ -d $SNAPDIR ]; then
    zfs destroy $ZFS_POOL/$ZFS_DATASET@$ZFS_SNAPSHOT
else
    echo "$SNAPDIR does not exist!" 1>&2
    exit 2
fi
 
exit 0
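
A crontab entry for the root user could then look something like this (script path, schedule and host name are examples):

# Weekly backup to the first DiskStation, Sundays at 03:30
30 3 * * 0 /root/bin/backup_home_to_synology.sh diskstation1 >> /var/log/backup_synology.log 2>&1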

Using ‘expect’ to distribute files among users

I’m currently teaching at the Summer School in Statistical Omics in Split, Croatia. A great experience!

Because of the computations involved in the project work, we have access to a server. However, since the machine is part of a university cluster, I haven’t been given full root permissions (in fact, I’m only allowed to use sudo to install packages).

Now, the problem I had to solve was that I needed to distribute a certain file (.Renviron) to each student’s home directory. Normally I’d use sudo to do that, but the admin hadn’t allowed me to use cp via sudo. Fortunately, I had a list of user names and passwords for the students (because I had to distribute those as well), so I thought I’d use su - to change to each student’s account and copy the file, something along the lines of

echo PASSWORD | su -

and then loop over each account. Unfortunately, while testing the script I found out it wouldn’t work since su complained:

su: must be run from a terminal

Then I remembered the expect tool, which executes commands based on what it ‘sees’ on the command line. In this case I wanted it to enter the password at su’s prompt. This is the expect script I came up with; it accepts two command line arguments: the user name and the password:

#!/usr/bin/expect -f
 
set user [lindex $argv 0]
set pass [lindex $argv 1]
 
spawn su - $user
expect "Password: "
send "$pass\r"
expect "$ "
send "cp -i /common/WORK/school/lennart/.Renviron .\r"
expect "$ "
send "ls -l .Renviron\r"
expect "$ "
send "exit\r"

The expect script is called from the Bash script I had already written:

#!/bin/bash
#
# This script is used to copy files from this directory to the
# home directories of the users listed in $USERFILE.
 
USERFILE=accounts.txt
SRCFILE=/common/WORK/school/lennart/.Renviron
 
while read user passw; do
    ./copy_file_to_users.expect $user $passw
done < $USERFILE
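
The accounts.txt file simply contains one user name and password per line, separated by whitespace; something like this (made-up values):

student01 Xu7aeThi
student02 Aiph2ohb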

Fixing colours in git output after upgrading to Ubuntu 14.04

After upgrading my Ubuntu 13.10 installation to 14.04, I noticed that the output of several git commands (e.g. git diff and git log) didn’t show colours as they used to, but showed ESC[ ANSI codes instead.
A quick internet search led to this post on unix.stackexchange.com, where the LESS environment variable was ‘blamed’. Indeed, I have my LESS variable (re)defined in my .bashrc and .zshrc files.

The solution was to add -R to the environment variable, which allows raw control characters to be displayed. I now have the environment variable defined as:

LESS='--quiet -X -F -R'
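
In ~/.bashrc or ~/.zshrc that looks like this (the export makes sure child processes, such as git’s pager, see the variable):

export LESS='--quiet -X -F -R'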

Permanently ban an IP address with fail2ban

Over the last few days I noticed in my logwatch e-mails that one IP address kept trying to log in to my server, even though it was blocked regularly by fail2ban.

Here’s a post that explains how to simply add a list of IP addresses to block permanently. There’s only one catch: the listing provided there contains an error; the word <name> is missing from the iptables command, probably due to HTML conversion. This is the correct line to be inserted into the actionstart section of /etc/fail2ban/action.d/iptables-multiport.conf:

cat /etc/fail2ban/ip.blacklist | while read IP; do iptables -I fail2ban-<name> 1 -s $IP -j DROP; done
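
The /etc/fail2ban/ip.blacklist file itself is just a plain list of addresses, one per line, for example:

192.168.20.25
203.0.113.42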

Use the following command to check if the IP address is indeed banned:

$ sudo iptables  -L fail2ban-ssh
Chain fail2ban-ssh (1 references)
target     prot opt source               destination         
DROP       all  --  192.168.20.25        anywhere            
RETURN     all  --  anywhere             anywhere 

Replacing a character in a Bash variable name

Today I needed to replace a : in a bunch of file names with a -, so I wanted to write a Bash for-loop to do just that. I vaguely remembered that you can do character replacements within variables, but couldn’t remember the details.

This is how it’s done:

for filename in *; do
    mv "$filename" "${filename/:/-}"
done

I put the variables in double quotes, because the file names contained spaces.
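
Note that ${filename/:/-} only replaces the first colon; to replace every occurrence, use a double slash:

mv "$filename" "${filename//:/-}"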

Doing a quick fixed-effects meta-analysis using the Rmeta package

This is a quick example of how to do a fixed-effects meta-analysis using the R package Rmeta, just so I don’t have to look it up again next time:

## Load the rmeta package, which provides meta.summaries() and meta.colors()
library(rmeta)
 
## Create data frame containing betas and standard errors
df <- data.frame()
df <- rbind(df, c(2., 0.2))
df <- rbind(df, c(2.5, 0.4))
df <- rbind(df, c(2.2, 0.2))
 
## Add study names
df <- cbind(df, c("study 1", "study 2", "study 3"))
 
colnames(df) <- c("beta", "se_beta", "name") 
 
## Do the meta-analysis 
ms <- meta.summaries(df$beta, df$se_beta, names=df$name)
 
## Add some colors
mc <- meta.colors(summary="darkgreen", zero="red")
 
## Make a forest plot
plot(ms, xlab=expression(beta ~ " (mmol/l)"), 
     ylab="Study", colors=mc, zero=2.6)

The resulting plot looks like this:
Forest plot of fake data

Exit a Bash script if an error occurs

Last week I found out that a Bash script I wrote to do some data QC gave me a false sense of security: a script continues even if one (or more) of the statements in the script fails (i.e. exits with a non-zero status). It turned out that for some of the data sets the QC wasn’t done correctly because I didn’t check the exit status after each step.

My first thought was: oh boy, that means I have to check $? after every step. That means a lot of repetitive code to write! Luckily my colleague came up with the answer: add

set -e

at the top of your Bash script and the script will fail as soon as one of its statements fails (for the fine print, see the top answer in this StackOverflow post).
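
A minimal illustration of the effect:

#!/bin/bash
set -e
echo "this line runs"
false                  # exits with a non-zero status, so the script aborts here
echo "this line is never reached"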
