Notes about open source software, computers, other stuff.

Author: LCK

A bioinformatician on the move...

Expanding a partition-backed ZFS special device

The spinning disk pool on my home server uses a mirrored special device (for storing metadata and small blocks, see also this blog post at Klara Systems) based on two NVMe SSDs. Because my home server only has two M.2 slots and I wanted to have a pure SSD ZFS pool as well, I partitioned the SSDs. Each SSD has a partition for the SSD pool and one for the special device of the storage pool (which uses a mirror of spinning disks).

Note: This isn’t really a recommended production setup, as you are basically hurting the performance of both the special device and the SSD pool. But for my home server this works fine. For example, I use the special device’s small-blocks functionality to store previews of the photos I store on my Nextcloud server. This makes scrolling through the Memories app’s timeline a breeze, even though the full-size photos are stored on the spinning disks.

Today, I noticed that the special device had filled up, and, given that there was still some unpartitioned space on the SSDs, I wondered if I could just expand the partitions used by the special device (using parted) and then have the ZFS pool recognise the extra space. I have expanded partition-based ZFS pools before, e.g. after upgrading the SSD in my laptop, but I hadn’t tried this with a special device.

After some experimentation, I can tell you: this works.

Here is how I tested this on a throw-away file-backed zpool. First create four test files: two for the actual mirror pool and two that I’ll add as a special device.

for i in {0..3} ; do truncate -s 1G file$i.raw ; done
ls -lh
total 4,0K
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file0.raw
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file1.raw
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file2.raw
-rw-rw-r-- 1 lennart lennart 1,0G mrt 11 12:46 file3.raw

Create a regular mirror pool:

zpool create testpool mirror $(pwd)/file0.raw $(pwd)/file1.raw
zpool list -v testpool
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                   960M   146K   960M        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M   104K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE

Add the special device to the zpool:

zpool add testpool special mirror $(pwd)/file2.raw $(pwd)/file3.raw
zpool list -v testpool
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   190K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M   190K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                 960M      0   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file2.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file3.raw     1G      -      -        -         -      -      -      -    ONLINE

I wasn’t sure whether I could just truncate the backing files for the special device to a larger size while they were part of the pool, so I detached them one by one, grew each file to 2 GB with truncate, and reattached them:

zpool detach testpool /tmp/tests/file2.raw
zpool list -v testpool
truncate -s 2G file2.raw
zpool attach testpool $(pwd)/file3.raw /tmp/tests/file2.raw
zpool list -v testpool

NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   194K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M   104K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  /tmp/tests/file3.raw       1G    90K   960M        -         -     0%  0.00%      -    ONLINE
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   278K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M  97.5K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                 960M   180K   960M        -         -     0%  0.01%      -    ONLINE
    /tmp/tests/file3.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file2.raw     2G      -      -        -         -      -      -      -    ONLINE

And for the second “disk”:

zpool detach testpool /tmp/tests/file3.raw
zpool list -v testpool
truncate -s 2G file3.raw
zpool attach testpool $(pwd)/file2.raw /tmp/tests/file3.raw
zpool list -v testpool

NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   218K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M    66K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  /tmp/tests/file2.raw       2G   152K   960M        -        1G     0%  0.01%      -    ONLINE
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  1.88G   296K  1.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M    54K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                 960M   242K   960M        -        1G     0%  0.02%      -    ONLINE
    /tmp/tests/file2.raw     2G      -      -        -        1G      -      -      -    ONLINE
    /tmp/tests/file3.raw     2G      -      -        -         -      -      -      -    ONLINE

And now, here comes the magic. Time to expand the pool and see if the special device grows to 2 GB:

zpool online -e testpool $(pwd)/file2.raw $(pwd)/file3.raw
zpool list -v testpool
NAME                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool                  2.88G   226K  2.87G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                 960M    48K   960M        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file0.raw     1G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file1.raw     1G      -      -        -         -      -      -      -    ONLINE
special                       -      -      -        -         -      -      -      -         -
  mirror-1                1.94G   178K  1.94G        -         -     0%  0.00%      -    ONLINE
    /tmp/tests/file2.raw     2G      -      -        -         -      -      -      -    ONLINE
    /tmp/tests/file3.raw     2G      -      -        -         -      -      -      -    ONLINE

Yay! It worked! So, for my actual storage pool, I ended up doing the following:

  • Given that the partitions used by the special device are located at the end of the SSDs, it was easy to expand them using parted’s resizepart command (see the sketch below).
  • Run zpool online -e storage partname-1 partname-2 (so no detach/attach was needed here).
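
For reference, the two steps looked roughly like this; the device names and partition numbers are made up for illustration, so adjust them to your own layout:

parted /dev/nvme0n1 resizepart 2 100%   # grow partition 2 into the free space
parted /dev/nvme1n1 resizepart 2 100%
zpool online -e storage nvme0n1p2 nvme1n1p2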

Don’t forget to clean up the testpool:

zpool destroy testpool
rm -r /tmp/tests

Bulk downloading and renaming of Expensify PDF reports

In my company, we have been using Expensify to manage small receipts, travel expenses, etc. Recently, however, I decided to switch to another platform, part of the SaaS suite our accountant uses. Even though it lacks some of the functionality provided by Expensify, having all receipts in a single location reduces the amount of time I have to spend on administrative tasks.

Dutch companies have to file a VAT report every quarter, so each quarter I exported the Expensify reports to CSV files (to send to my accountant) and to PDF, as a more “visual” backup that lists the reported expenses sorted into categories and, importantly, also includes the scans of the various receipts.

As we changed accountants a couple of years ago, I wasn’t sure whether I had actually downloaded both the CSV and the PDF file for each Expensify report. Keeping records is required by Dutch law, so I decided to make sure and download all PDF files and back them up somewhere.

Unfortunately, the Expensify website doesn’t offer an option for bulk downloading the PDF files. They do offer a kind of REST API (they call it the Integration Server) that I had played with years ago, so I decided to try that. Luckily, the credentials I had saved in my password manager still worked.

The process for downloading the PDFs consists of two steps:

  • Run a command to generate the reports; this returns the file names of the PDF files.
  • Use those names to download the PDFs.

The first step took a couple of minutes to run and then listed the filenames of the PDFs on stdout:

curl -X POST 'https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations' \
    -d 'requestJobDescription={
        "type":"file",
        "credentials":{
            "partnerUserID":"XXXXXXXXXX",
            "partnerUserSecret":"YYYYYYYYYY"
        },
        "onReceive":{
            "immediateResponse":["returnRandomFileName"]
        },
        "inputSettings":{
            "type":"combinedReportData",
            "filters":{
                "startDate":"2013-01-01"
            }
        },
        "outputSettings":{
            "fileExtension":"pdf",
            "includeFullPageReceiptsPdf":"true"
        }
    }' \
    --data-urlencode 'template@expensify_template.ftl'

I’m not sure what the expensify_template.ftl file does in this command, but it was necessary to create that file locally; otherwise the curl call would return an error. I simply copied the sample provided in the documentation for the Expensify Integration Server. I saved the long list of PDF filenames output by the above command. A typical filename looks like this: exportc992bd79-aa4a-4b04-a76a-1149194bac94-34589514.pdf. Not very descriptive… As expected (and confirmed in the web UI), there were 191 file names.

Next, step two: actually downloading the files. The basic call for that is:

curl -X POST 'https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations' \
    -d 'requestJobDescription={
        "type":"download",
        "credentials":{
            "partnerUserID":"XXXXXXXXXX",
            "partnerUserSecret":"YYYYYYYYYY"
        },
        "fileName":"exportc992bd79-aa4a-4b04-a76a-1149194bac94-5803035.pdf",
        "fileSystem":"integrationServer"}
    }' \
    --data-urlencode 'template@expensify_template.ftl' --output "my_output.pdf"

So, in order to download all PDFs, I saved all file names in the file pdflist. A quick check confirmed that all file names are unique:

$ wc -l pdflist
191 pdflist
$ sort pdflist | uniq | wc -l
191

Next, I used a loop to read each line from the pdflist file and fiddled a bit with the quotes (escaping the inner double quotes) so I could use the pdf variable in the curl call and download each file:

cat pdflist | while read -r pdf; do
curl -X POST 'https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations' \
    -d "requestJobDescription={
        \"type\":\"download\",
        \"credentials\":{
            \"partnerUserID\":\"XXXXXXXXXX\",
            \"partnerUserSecret\":\"YYYYYYYYYY\"
        },
        \"fileName\":\"${pdf}\",
        \"fileSystem\":\"integrationServer\"
    }" \
    --data-urlencode 'template@expensify_template.ftl' --output "${pdf}"
done

This indeed gave me 191 Expensify report PDFs, with very uninformative names 😐 . To fix that I resorted to some more shell “scripting”. Every report has a title (usually something like “Small expenses 2020 Q4”) and, judging from the pdftotext output, that title always appears on the third line. So I moved the original PDFs to a separate “archive” directory OriginalExports and ran the following to copy each PDF to a new name equal to its title. My first attempt failed somewhat, because the number of renamed PDF files was smaller than the number of original PDFs. I guessed this would happen when two reports have the same title, and indeed, adding -i to the cp command to warn me about overwrites showed I was right. As this affected only four files, I renamed those manually.

for pdf in OriginalExports/export*.pdf; do
    echo "${pdf}"
    title=$(pdftotext "${pdf}" - | head -3 | tail -1 | tr "/" "_")
    cp -i "${pdf}" "${title}.pdf"
done
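
Had there been many more collisions, a variant that appends a numeric suffix instead of prompting would have avoided the manual step; a quick sketch (untested, with the same assumption that the title is on the third line):

for pdf in OriginalExports/export*.pdf; do
    title=$(pdftotext "${pdf}" - | head -3 | tail -1 | tr "/" "_")
    out="${title}.pdf"
    n=1
    while [ -e "${out}" ]; do   # title already taken: add a counter
        n=$((n + 1))
        out="${title} (${n}).pdf"
    done
    cp "${pdf}" "${out}"
done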

So there I had my backup of all receipts since we started using Expensify. And if the tax office or the accountant ever wants to see those receipts, I am now sure I can provide them.

Fixing delayed syncing with Linux Nextcloud client

Recently, I noticed that changed files were not picked up by the Nextcloud client as fast as before. As a result I sometimes missed a file (or changes in a file) on my laptop that had been created on my desktop PC.

Today, I tried to run tail and got the following message:

tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling

This made me realise that the problem with the delayed client sync could be related to the inotify system for monitoring file changes.

It turns out there is a maximum to the number of inotify file watches:

sysctl fs.inotify
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 65536
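
Since the Nextcloud client needs roughly one inotify watch per directory it monitors, you can get a rough lower bound on the number of watches required by counting the directories in the synced folders (substitute your own sync paths):

find ~/Nextcloud -type d | wc -l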

Given that on my desktop the Nextcloud client syncs about 235 GB with my personal Nextcloud server and about 10 GB to two servers for work (including several Git repositories), I could imagine that 65536 watches are not enough. Indeed, manually increasing that number made file syncs more or less instantaneous again:

sysctl -n -w fs.inotify.max_user_watches=524288
524288
sysctl fs.inotify
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 524288

To make this change permanent I added the following lines at the bottom of /etc/sysctl.conf:

grep -A1 inotify /etc/sysctl.conf
# Increase the number of inotify watches so that the Nextcloud client
# keeps syncing without delay
fs.inotify.max_user_watches=524288

Side note: to apply the changes in the /etc/sysctl.conf file use sudo sysctl -p.

Don’t install Citrix’s AppProtection!

Today I returned to my desktop computer after having been away for a couple of days, during which I only used my laptop. Both machines run Ubuntu Linux, currently version 23.04.

The project I was going to work on required R, but for some reason R didn’t start: no error message, only an exit status of 1, and I was back at the shell prompt. Running R --version worked fine, but Rscript failed in the same way as regular R. I tried running R --vanilla, starting R as a different user; nothing helped.

Time to dig deeper. Make sure all Ubuntu packages are up to date. Reinstall the r-base and r-base-core packages, check whether those packages had been recently updated (no), check whether the package versions were identical to the ones on my laptop (where R worked fine). Nothing…

Maybe there is a problem with a (missing) dynamic library? In the (distant) past I have had problems with that, so it was worth a shot:

$ ldd /usr/bin/Rscript
	linux-vdso.so.1 (0x00007ffcac7d4000)
	/usr/local/lib/AppProtection/libAppProtection.so (0x00007f01b9e00000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f01b9bfb000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f01ba204000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f01ba1ff000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f01ba22f000)
	libX11.so.6 => /lib/x86_64-linux-gnu/libX11.so.6 (0x00007f01ba0c1000)
	libxcb.so.1 => /lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f01ba095000)
	libXi.so.6 => /lib/x86_64-linux-gnu/libXi.so.6 (0x00007f01ba081000)
	libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f01b9991000)
	libXau.so.6 => /lib/x86_64-linux-gnu/libXau.so.6 (0x00007f01ba07b000)
	libXdmcp.so.6 => /lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f01ba073000)
	libXext.so.6 => /lib/x86_64-linux-gnu/libXext.so.6 (0x00007f01ba05c000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f01b98a8000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f01ba038000)
	libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f01b9893000)
	libmd.so.0 => /lib/x86_64-linux-gnu/libmd.so.0 (0x00007f01ba02b000)

That’s strange: I usually don’t have stuff installed in /usr/local. What was this AppProtection library doing there? And then it hit me: recently (the last time I had used my desktop PC) I had to install Citrix’s ICA client to do some remote desktop work for one of my clients. When I installed that package, I was asked something about installing some sort of app protection. I had selected “yes”… With a feature named like that, I should have known better…

Anyway, time to see if this shared library was indeed the problem. I moved the /usr/local/lib/AppProtection/ directory out of the way and tried to start R. All was fine and dandy again! Except that now even an ls command gave an error:

$ ls /usr/local/lib/AppProtection
ERROR: ld.so: object '/usr/local/lib/AppProtection/libAppProtection.so' from /etc/ld.so.preload cannot be preloaded (cannot open shared object file): ignored.
ls: cannot access '/usr/local/lib/AppProtection': No such file or directory
ERROR: ld.so: object '/usr/local/lib/AppProtection/libAppProtection.so' from /etc/ld.so.preload cannot be preloaded (cannot open shared object file): ignored.

Apparently (obviously?), something still tried to preload the library, even though it no longer existed. It turned out the preloading was configured in the file /etc/ld.so.preload:

/usr/local/lib/AppProtection/libAppProtection.so

Given that this was the only content of that file, I opted to just delete it. Finally, my system was back in a working state.
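
For the record, the whole fix came down to something like these two commands (paths as found on my system):

sudo mv /usr/local/lib/AppProtection /usr/local/lib/AppProtection.disabled
sudo rm /etc/ld.so.preload    # its only line pointed at libAppProtection.so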

Conclusion: be more careful when installing stuff from external sources and definitely don’t install anything you don’t really need, like Citrix’s App Protection.

P.S. It turns out this was also the reason the Mattermost app would no longer start.

Add horizontal scroll bars to the WordPress Hemingway theme

For this blog, I use the Hemingway theme by Anders Norén. I really like it, but while writing a post with some long terminal outputs earlier today, I noticed that the <pre> blocks in which code and terminal outputs are wrapped (by Org2Blog) get line-wrapped. This makes the blocks difficult to read, especially when a block contains e.g. an ASCII-art-style table.

So I wanted to see if I could somehow fix this with some CSS. And it turns out you can! In the WordPress admin screen, go to “Appearance”, then “Additional CSS”. There I added the following and clicked on “Publish”:

.post-content pre {
  word-wrap: normal;
  overflow-x: auto;
  white-space: pre;
}

This piece of CSS overrides part of the theme’s stylesheet and makes sure the <pre> blocks get a horizontal scroll bar.

LXD container snapshots, ZFS snapshots and moving containers

Here, we investigate the behaviour of LXD when moving containers between LXD cluster nodes, with a focus on various types of (filesystem) snapshots.

LXD containers can be snapshotted by LXD itself, but if one uses a ZFS storage backend, one can also use a tool like Sanoid to snapshot a container’s filesystem. When moving an LXD container from one LXD cluster node to another, one of course wants those filesystem snapshots to move along as well. Spoiler: this isn’t always the case.

Let’s create a test container on my home LXD cluster (which uses ZFS as default storage backend), starting on node wiske2:

lxc launch ubuntu:22.04 snapmovetest --target=wiske2

Check the container is running:

lxc list snapmovetest

+--------------+---------+-----------------------+-------------------------------------------+-----------+-----------+----------+
|     NAME     |  STATE  |         IPV4          |                   IPV6                    |   TYPE    | SNAPSHOTS | LOCATION |
+--------------+---------+-----------------------+-------------------------------------------+-----------+-----------+----------+
| snapmovetest | RUNNING | 192.168.10.158 (eth0) | 2a10:3781:782:1:216:3eff:fed5:ef48 (eth0) | CONTAINER | 0         | wiske2   |
+--------------+---------+-----------------------+-------------------------------------------+-----------+-----------+----------+

Now, let’s use LXD to create two snapshots:

lxc snapshot snapmovetest "Test1"
sleep 10
lxc snapshot snapmovetest "Test2"

Check the snapshots have been made:

lxc info snapmovetest | awk '$1=="Snapshots:" {toprint=1}; {if(toprint==1) {print $0}}'

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| Test1 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+
| Test2 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+

At the ZFS level:

zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                 24.7M   192G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1    60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2    60K      -      748M  -

All is fine! Now, let’s move the container to node wiske3:

lxc stop snapmovetest
lxc move snapmovetest snapmovetest --target=wiske3
lxc list snapmovetest

+--------------+---------+------+------+-----------+-----------+----------+
|     NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+--------------+---------+------+------+-----------+-----------+----------+
| snapmovetest | STOPPED |      |      | CONTAINER | 2         | wiske3   |
+--------------+---------+------+------+-----------+-----------+----------+

Check the snapshots:

lxc info snapmovetest | awk '$1=="Snapshots:" {toprint=1}; {if(toprint==1) {print $0}}'

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| Test1 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+
| Test2 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+

At the ZFS level:

zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                  749M   202G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1    60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2    60K      -      748M  -

So far so good: snapshots taken with the native LXD toolchain get moved. Now let’s manually create a ZFS snapshot:

zfs snapshot rpool/lxd/containers/snapmovetest@manual_zfs_snap
zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                                USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                   749M   202G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1     60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2     60K      -      748M  -
rpool/lxd/containers/snapmovetest@manual_zfs_snap     0B      -      748M  -

Now move the container back to node wiske2:

lxc move snapmovetest snapmovetest --target=wiske2
lxc list snapmovetest

+--------------+---------+------+------+-----------+-----------+----------+
|     NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+--------------+---------+------+------+-----------+-----------+----------+
| snapmovetest | STOPPED |      |      | CONTAINER | 2         | wiske2   |
+--------------+---------+------+------+-----------+-----------+----------+

What happened to the snapshots?

lxc info snapmovetest | awk '$1=="Snapshots:" {toprint=1}; {if(toprint==1) {print $0}}'

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| Test1 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+
| Test2 | 2023/03/11 22:22 CET |            | NO       |
+-------+----------------------+------------+----------+

zfs list -rtall rpool/lxd/containers/snapmovetest

NAME                                               USED  AVAIL     REFER  MOUNTPOINT
rpool/lxd/containers/snapmovetest                  749M   191G      748M  legacy
rpool/lxd/containers/snapmovetest@snapshot-Test1    60K      -      748M  -
rpool/lxd/containers/snapmovetest@snapshot-Test2    60K      -      748M  -

Somehow, the ZFS-level snapshot has been removed… I guess this part of the LXD manual should be written in bold (emphasis mine):

LXD assumes that it has full control over the ZFS pool and dataset. Therefore, you should never maintain any datasets or file system entities that are not owned by LXD in a ZFS pool or dataset, because LXD might delete them.

Consequently, in an LXD cluster one shouldn’t use Sanoid to snapshot ZFS-backed LXD container filesystems. Instead, use LXD’s built-in automatic snapshot capabilities (see the snapshots.expiry and snapshots.schedule options).
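
For example, something along these lines (the schedule and expiry values are just an illustration) enables daily snapshots that are kept for four weeks:

lxc config set snapmovetest snapshots.schedule "@daily"
lxc config set snapmovetest snapshots.expiry "4w"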

Clean up:

lxc delete snapmovetest

Upgrading nodejs to the latest LTS release on Ubuntu 21.10

Today I upgraded the Bash language server (to v3.0.3), after which I noticed that it stopped working. When loading a .bash file, the language server didn’t load and told me to look in the error output for more information. In Emacs, the errors of the Bash language server can be found in the *bash-ls::stderr* buffer, which showed me:

/home/lennart/.emacs.d/.cache/lsp/npm/bash-language-server/lib/node_modules/bash-language-server/node_modules/vscode-jsonrpc/lib/common/linkedMap.js:40
        return this._head?.value;
                          ^

SyntaxError: Unexpected token '.'
    at wrapSafe (internal/modules/cjs/loader.js:915:16)
    at Module._compile (internal/modules/cjs/loader.js:963:27)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Module.require (internal/modules/cjs/loader.js:887:19)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object.<anonymous> (/home/lennart/.emacs.d/.cache/lsp/npm/bash-language-server/lib/node_modules/bash-language-server/node_modules/vscode-jsonrpc/lib/common/api.js:37:21)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)

I re-ran lsp-install-server, which pointed out that I had nodejs v12.22.5 installed and the language server required v14 or higher.

Time to figure out how to install a newer nodejs version on my Ubuntu 21.10 machine. It turns out that v12 is no longer maintained; the current LTS version of nodejs is v16. Here I found instructions on how to install a given version of nodejs on Ubuntu. For v16, this boils down to running:

curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -

The script that this command fetches (and executes as root) is quite elaborate, but in the end it simply creates the file /etc/apt/sources.list.d/nodesource.list, with the following contents:

deb [signed-by=/usr/share/keyrings/nodesource.gpg] https://deb.nodesource.com/node_16.x impish main
deb-src [signed-by=/usr/share/keyrings/nodesource.gpg] https://deb.nodesource.com/node_16.x impish main

After that, a simple apt upgrade didn’t suffice. The nodejs upgrade was held back because of a dependency problem. Even an explicit upgrade of the nodejs package didn’t work:

$ sudo apt upgrade nodejs
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies.
 libnode72 : Conflicts: nodejs-legacy
E: Broken packages

So, I resorted to a full apt dist-upgrade, which worked fine. After that, I reopened a Bash script and all was fine.
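
For a quick sanity check that the upgrade actually happened:

node --version   # should now report a v16.x release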

Getting SMART information from a Seagate Expansion Portable drive

A couple of days ago, I bought myself a 5TB Seagate Expansion Portable drive. This is a 2.5″ external spinning hard disk that connects over USB. In a review on a well-known Dutch website for IT enthusiasts, I read that, inside, the drive consists of an ST5000LM000 hard drive and a USB-to-SATA chip (in contrast to other manufacturers like WD that solder the USB connector directly onto the drive’s circuit board).

After connecting the drive to my computer (that currently runs Ubuntu 21.10), I wanted to see what I could learn about the drive in terms of SMART information. So I tried:

$ sudo smartctl -a /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.0-41-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

Read Device Identity failed: scsi error unsupported field in scsi command

A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

Trying the suggested -T option didn’t help. So I played around with the -d option, which I had used before when connecting to hard drives behind RAID controllers. That looked better:

$ sudo smartctl -a /dev/sda -T conservative -d sat,auto
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.0-41-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               Seagate
Product:              Expansion HDD
Revision:             1901
Compliance:           SPC-4
User Capacity:        5.000.981.077.504 bytes [5,00 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is fully provisioned
Logical Unit id:      0x3e41434334313346
Serial number:        00000000NACC413F
Device type:          disk
Local Time is:        Thu May 19 21:54:04 2022 CEST
SMART support is:     Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

No Self-tests have been logged

The drive reports the correct size, but also says there is no SMART support. In fact, using -d scsi gave identical output. Because there should only be this USB-to-SATA translation layer in between, I thought that somehow I should be able to get the SMART commands to work. Looking through the smartmontools website, I came across this article that explains the “SAT with UAS” situation. It seems that the high-speed UAS driver disables SAT transfers in certain cases. The workaround is to tell the kernel to use the older usb-storage driver instead of the uas driver. With the lsusb command I identified the vendor and product ID of the drive:

$ lsusb | grep -i seagate
Bus 004 Device 012: ID 0bc2:2037 Seagate RSS LLC Expansion HDD

Next, I made sure to unmount and disconnect the drive and then instructed the kernel to use the old driver for this device:

$ echo "0x0bc2:0x2037:u" | sudo tee /sys/module/usb_storage/parameters/quirks

and reconnected the drive. I verified in the kernel logs that the usb-storage driver was indeed used:

mei 19 22:08:30 barabas kernel: usb 4-3.3: new SuperSpeed USB device number 12 using xhci_hcd
mei 19 22:08:30 barabas mtp-probe[983206]: checking bus 4, device 12: "/sys/devices/pci0000:00/0000:00:08.1/0000:0c:00.3/usb4/4-3/4-3.3"
mei 19 22:08:30 barabas mtp-probe[983206]: bus: 4, device: 12 was not an MTP device
mei 19 22:08:30 barabas kernel: usb 4-3.3: New USB device found, idVendor=0bc2, idProduct=2037, bcdDevice=19.01
mei 19 22:08:30 barabas kernel: usb 4-3.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
mei 19 22:08:30 barabas kernel: usb 4-3.3: Product: Expansion HDD
mei 19 22:08:30 barabas kernel: usb 4-3.3: Manufacturer: Seagate
mei 19 22:08:30 barabas kernel: usb 4-3.3: SerialNumber: 00000000NACC413F
mei 19 22:08:30 barabas kernel: usb 4-3.3: UAS is ignored for this device, using usb-storage instead
mei 19 22:08:30 barabas kernel: usb-storage 4-3.3:1.0: USB Mass Storage device detected
mei 19 22:08:30 barabas kernel: usb-storage 4-3.3:1.0: Quirks match for vid 0bc2 pid 2037: 800000
mei 19 22:08:30 barabas kernel: scsi host6: usb-storage 4-3.3:1.0

Notice the “UAS is ignored” message. And lo and behold, smartctl now works and shows all relevant information:

$ sudo smartctl -a /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.13.0-41-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 2.5 5400
Device Model:     ST5000LM000-2U8170
Serial Number:    WCJ6AG24
LU WWN Device Id: 5 000c50 0e0939684
Firmware Version: 0001
User Capacity:    5.000.981.078.016 bytes [5,00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu May 19 22:30:52 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
					was never started.
					Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever
					been run.
Total time to complete Offline
data collection: 		(    0) seconds.
Offline data collection
capabilities: 			 (0x71) SMART execute Offline immediate.
					No Auto Offline data collection support.
					Suspend Offline collection upon new
					command.
					No Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 ( 827) minutes.
Conveyance self-test routine
recommended polling time: 	 (   2) minutes.
SCT capabilities: 	       (0x7035)	SCT Status supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   067   065   006    Pre-fail  Always       -       5367808
  3 Spin_Up_Time            0x0003   100   100   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       10
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   253   045    Pre-fail  Always       -       3765
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       0 (86 255 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       9
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   069   069   040    Old_age   Always       -       31 (Min/Max 29/31)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       2
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       23
194 Temperature_Celsius     0x0022   031   040   000    Old_age   Always       -       31 (0 19 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       0 (7 164 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       1723222
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3644586

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Now that I know how to do this, let’s undo the “use the usb-storage driver instead of the uas driver” quirk (alternatively, a reboot would also work, but who wants that?):

$ echo "" | sudo tee /sys/module/usb_storage/parameters/quirks

I will use this drive as a backup drive while I am travelling, so the aim of this post is not only to inform you as my reader(s), but also to remind my future self of how I did this. Now I only need to remember to check the SMART values every once in a while :-).

Opening Emacs (-client) windows with full screen height

I usually want my Emacs windows, including those opened via emacsclient, to be opened using the full screen height. It turns out that Emacs has a command line option for this:

emacs --fullheight

Now, for emacsclient this option doesn’t exist, so how can we solve this? Looking around the web, I found this answer on Emacs StackExchange that explains how to do this from within Emacs (see also the fullscreen option on the corresponding page in the Emacs manual). Combined with this answer on StackOverflow, which shows how to use the -F/--frame-parameters option of emacsclient, I managed to open a full-height Emacs client window:

emacsclient --create-frame --frame-parameters="'(fullscreen . fullheight)"
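
If you start such a frame from a terminal regularly, a small wrapper function in your ~/.bashrc saves some typing (the name ec is just my choice):

# Open a full-height Emacs client frame; extra arguments are passed on
ec() {
    emacsclient --create-frame \
        --frame-parameters="'(fullscreen . fullheight)" "$@"
}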

Most of the time I open new Emacs (-client) windows using the button in Ubuntu’s/Gnome shell’s dock. So how do we incorporate the aforementioned options in the relevant .desktop file? Before explaining how I did this, I have to mention that I don’t use a pre-packaged version of Emacs. As of this writing Emacs v27.2 is the latest official release, but I have been compiling Emacs from source for about two years now. About a month ago I compiled Emacs from the emacs-28 branch [1], which contains what will become Emacs v28.

In this branch the .desktop files used by Gnome for its list of applications (including the icons/launchers that end up in the Gnome shell dock) have received some love. For example, the emacsclient.desktop file now opens a regular Emacs on first launch, but subsequent clicks on the icon launch an Emacs client window. Right-clicking the icon shows an option called “New Instance”, which does as it says: launch a new Emacs instance. Well done, Emacs maintainers! This perfectly fits my workflow, where most of the time I want to open a client window, but sometimes I want a new Emacs instance (e.g. when I don’t want to clutter my regular workspaces).

So, getting back to the fullheight topic, editing the emacsclient.desktop file seemed like the way to go. Given that I compile Emacs from source and do not install it system-wide (I used the --prefix=/home/$USER/.local option when running ./configure), the .desktop files can be found in ~/.local/share/applications. This is the contents of the emacsclient.desktop file before I edited it:

[Desktop Entry]
Name=Emacs (Client)
GenericName=Text Editor
Comment=Edit text
MimeType=text/english;text/plain;text/x-makefile;text/x-c++hdr;text/x-c++src;text/x-chdr;text/x-csrc;text/x-java;text/x-moc;text/x-pascal;text/x-tcl;text/x-tex;application/x-shellscript;text/x-c;text/x-c++;
Exec=sh -c "if [ -n \\"\\$*\\" ]; then exec emacsclient --alternate-editor= --display=\\"\\$DISPLAY\\" \\"\\$@\\"; else exec emacsclient --alternate-editor= --create-frame; fi" %F
Icon=emacs
Type=Application
Terminal=false
Categories=Development;TextEditor;
StartupNotify=true
StartupWMClass=Emacs
Keywords=emacsclient;
Actions=new-window;new-instance;

[Desktop Action new-window]
Name=New Window
Exec=/home/lennart/.local/bin/emacsclient --alternate-editor= --create-frame %F

[Desktop Action new-instance]
Name=New Instance
Exec=emacs %F

After my edits, this is the contents of the file:

[Desktop Entry]
Name=Emacs (Client)
GenericName=Text Editor
Comment=Edit text
MimeType=text/english;text/plain;text/x-makefile;text/x-c++hdr;text/x-c++src;text/x-chdr;text/x-csrc;text/x-java;text/x-moc;text/x-pascal;text/x-tcl;text/x-tex;application/x-shellscript;text/x-c;text/x-c++;
Exec=sh -c "if [ -n \\"\\$*\\" ]; then exec emacsclient --alternate-editor=  --frame-parameters=\\"'(fullscreen . fullheight)\\" --display=\\"\\$DISPLAY\\" \\"\\$@\\"; else exec emacsclient --alternate-editor= --create-frame --frame-parameters=\\"'(fullscreen . fullheight)\\"; fi" %F
Icon=emacs
Type=Application
Terminal=false
Categories=Development;TextEditor;
StartupNotify=true
StartupWMClass=Emacs
Keywords=emacsclient;
Actions=new-window;new-instance;

[Desktop Action new-window]
Name=New Window
Exec=/home/lennart/.local/bin/emacsclient --alternate-editor= --create-frame --frame-parameters="'(fullscreen . fullheight)" %F

[Desktop Action new-instance]
Name=New Instance
Exec=emacs --fullheight %F

The diff is:

@@ -3,7 +3,7 @@
 GenericName=Text Editor
 Comment=Edit text
 MimeType=text/english;text/plain;text/x-makefile;text/x-c++hdr;text/x-c++src;text/x-chdr;text/x-csrc;text/x-java;text/x-moc;text/x-pascal;text/x-tcl;text/x-tex;application/x-shellscript;text/x-c;text/x-c++;
-Exec=sh -c "if [ -n \\"\\$*\\" ]; then exec emacsclient --alternate-editor= --display=\\"\\$DISPLAY\\" \\"\\$@\\"; else exec emacsclient --alternate-editor= --create-frame; fi" %F
+Exec=sh -c "if [ -n \\"\\$*\\" ]; then exec emacsclient --alternate-editor=  --frame-parameters=\\"'(fullscreen . fullheight)\\" --display=\\"\\$DISPLAY\\" \\"\\$@\\"; else exec emacsclient --alternate-editor= --create-frame --frame-parameters=\\"'(fullscreen . fullheight)\\"; fi" %F
 Icon=emacs
 Type=Application
 Terminal=false
@@ -15,8 +15,8 @@

 [Desktop Action new-window]
 Name=New Window
-Exec=/home/lennart/.local/bin/emacsclient --alternate-editor= --create-frame %F
+Exec=/home/lennart/.local/bin/emacsclient --alternate-editor= --create-frame --frame-parameters="'(fullscreen . fullheight)" %F

 [Desktop Action new-instance]
 Name=New Instance
-Exec=emacs %F
+Exec=emacs --fullheight %F

One final note: for those who don’t use the Emacs client, there is also the emacs.desktop file, with the same icon. You can find out which one is in your Gnome shell dock by running:

gsettings get org.gnome.shell favorite-apps

which returns a list like this:

['org.gnome.Terminal.desktop', 'thunderbird.desktop', 'firefox.desktop', 'emacsclient.desktop']

If you’d like to edit this variable manually, you can use either dconf-editor to edit org/gnome/shell/favorite-apps, or set it directly:

gsettings set org.gnome.shell favorite-apps "['org.gnome.Terminal.desktop', 'thunderbird.desktop', 'firefox.desktop', 'emacs.desktop']"

Note: the order of the list matters!

Footnotes:

[1] The commit I used was d86b2e59c.

Use a script to convert Office files to PDF via Nautilus’ right-click menu

Recently, I bought a reMarkable 2, a tablet-like device with an e-ink screen that allows me to replace real paper with digital note taking, while keeping the hand-written aspect of writing notes. The device also allows me to read and annotate PDFs in a comfortable way. Given that I use RMfuse to ‘mount’ the reMarkable cloud on my computer, I normally drag-n-drop the files I want to read in GNOME Files (formerly Nautilus, GNOME’s file manager) from their location on my computer to the mounted cloud.

This works really well so far. However, I also regularly receive files in the MS Office .docx format. Often I need to make substantial changes to these documents, which I do on my laptop or desktop. But sometimes I only need to read them, or just put my signature at the bottom. In those cases I would open the .docx file in LibreOffice, convert it to PDF and copy it to my reMarkable. To speed this up, I thought it would be nice to be able to right-click a .docx file in GNOME Files/Nautilus and have it converted to PDF automatically, after which I can drag-n-drop the PDF file to the mounted reMarkable cloud.

So the question was: how can I add an item to the right-click menu of Nautilus that runs a script when I click on it? After looking around on the internet, this turned out to be quite easy: scripts placed in ~/.local/share/nautilus/scripts/ end up in the Nautilus right-click menu under the Scripts submenu.

To do the actual conversion to PDF, I created the following script:

#!/bin/bash
# This script converts the selected file to PDF using LibreOffice
# For general instructions on how to use Nautilus scripts, see
# https://help.ubuntu.com/community/NautilusScriptsHowto
#
# Save this script in ~/.local/share/nautilus/scripts/ and make it
# executable.

IFS_BAK=$IFS
IFS="
"

for SelectedFile in ${NAUTILUS_SCRIPT_SELECTED_FILE_PATHS}; do
    soffice \
        --nodefault \
        --nolockcheck \
        --nologo \
        --norestore \
        --nofirststartwizard \
        --convert-to pdf "${SelectedFile}"
done

IFS=$IFS_BAK

Because I wanted to be able to select multiple files, some having spaces in their names, I had to make sure the space character wasn’t used to split the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS variable. That is why I temporarily replace the IFS variable with a newline.
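
Installing the script is just a matter of dropping it into place and making it executable; the file name you give it (here I assume you saved the script above as convert-to-pdf.sh) becomes the menu entry:

mkdir -p ~/.local/share/nautilus/scripts
cp convert-to-pdf.sh ~/.local/share/nautilus/scripts/"Convert to PDF"
chmod +x ~/.local/share/nautilus/scripts/"Convert to PDF"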

I have only tried this on .docx files so far, but I guess it would work on presentations and spreadsheets as well.
