Linux Investigation (Part 3)

Please read Part 1 and Part 2 for additional background

We’ve actually done quite well reconstructing the major points of our intrusion scenario, but there are definitely some lingering questions. Let’s see if we can dive deeper into some of these artifacts.

LD_PRELOAD Rootkit Or Not?

It would be nice to positively identify libymv.so.3 as an LD_PRELOAD rootkit, rather than just referring to it as the “suspicious library”. Fortunately we have a memory image and can use Volatility to extract the library. We can use linux.elfs to dump all the objects associated with a PID. We believe that the SSH daemon was explicitly restarted to pick up libymv.so.3, so we’ll dump that process. As we saw in Part 2, that PID is 937.

(venv) $ mkdir dump-elfs-pid-937
(venv) $ vol -q -f memory_dump/avml.lime -o dump-elfs-pid-937 linux.elfs --pid 937 --dump
Volatility 3 Framework 2.27.1

PID Process Start End File Path File Output

937 sshd 0x55e3f52a9000 0x55e3f52b2000 /usr/sbin/sshd pid.937.sshd.0x55e3f52a9000.dmp
937 sshd 0x7f7da1093000 0x7f7da1098000 /usr/lib/x86_64-linux-gnu/libzstd.so.1.5.7 pid.937.sshd.0x7f7da1093000.dmp
937 sshd 0x7f7da115d000 0x7f7da1160000 /usr/lib/x86_64-linux-gnu/libpcre2-8.so.0.14.0 pid.937.sshd.0x7f7da115d000.dmp
937 sshd 0x7f7da120c000 0x7f7da1234000 /usr/lib/x86_64-linux-gnu/libc.so.6 pid.937.sshd.0x7f7da120c000.dmp
937 sshd 0x7f7da1400000 0x7f7da14f7000 /usr/lib/x86_64-linux-gnu/libcrypto.so.3 pid.937.sshd.0x7f7da1400000.dmp
937 sshd 0x7f7da1a3c000 0x7f7da1a3f000 /usr/lib/x86_64-linux-gnu/libz.so.1.3.1 pid.937.sshd.0x7f7da1a3c000.dmp
937 sshd 0x7f7da1a5e000 0x7f7da1a65000 /usr/lib/x86_64-linux-gnu/libselinux.so.1 pid.937.sshd.0x7f7da1a5e000.dmp
937 sshd 0x7f7da1aa5000 0x7f7da1aa7000 /usr/lib/x86_64-linux-gnu/libymv.so.3 pid.937.sshd.0x7f7da1aa5000.dmp
937 sshd 0x7f7da1ab3000 0x7f7da1ab5000 [vdso] pid.937.sshd.0x7f7da1ab3000.dmp
937 sshd 0x7f7da1ab5000 0x7f7da1ab6000 /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 pid.937.sshd.0x7f7da1ab5000.dmp
937 sshd 0x7f7da1ade000 0x7f7da1ae9000 /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 pid.937.sshd.0x7f7da1ade000.dmp

libymv.so.3 was extracted to dump-elfs-pid-937/pid.937.sshd.0x7f7da1aa5000.dmp. We could go full tilt at this and load it up for static analysis. But in the middle of an incident, I’m much more likely to do a first pass using “strings” and an internet search engine. Here are some of the more interesting strings I see in this sample:

lpe_drop_shell
timebomb
backconnect
Enjoy the shell!
/tmp/silly.txt

Running those terms through my favorite search engine, my first hit was this useful list of rootkit IOCs. Apparently these strings are indicators for the Father rootkit.

If you look at the documentation for Father, it allows process, file, and directory hiding via a hard-coded GID. In our case this is apparently GID 7823 as we saw in Part 2. Father also has an accept() hook backdoor which is activated by connecting to any network service from a hard-coded source port. Based on the linux.sockstat output, this appears to be port 48411 in our case.
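Since Father marks hidden objects with its magic GID, the UAC bodyfile is a quick place to hunt for other files the rootkit was concealing. Here is a sketch against synthetic data; the file name below is hypothetical, and the real scan would target bodyfile/bodyfile.txt:

```shell
# TSK bodyfile fields: md5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
# so the GID is field 6. Build one illustrative line, then scan for GID 7823.
printf '0|/tmp/hidden-by-rootkit|131073|-rw-r--r--|0|7823|22|1774395272|1774395272|1774395272|1774395272\n' > demo-bodyfile.txt
awk -F'|' '$6 == 7823 { print $2 }' demo-bodyfile.txt   # prints /tmp/hidden-by-rootkit
```

The same one-liner run against the real bodyfile would list every file and directory carrying the rootkit’s marker GID.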

Father also has a PAM hook that steals user passwords and saves them to /tmp/silly.txt. UAC captured this file for us:

(venv) $ cat \[root\]/tmp/silly.txt 
100:1:password:worker

“worker” is the password for the worker account. The rest of the values here are hard-coded in the Father source code and are meaningless.

So even without static analysis I’m willing to state with high confidence that libymv.so.3 is the Father LD_PRELOAD rootkit. And since I implanted it into the system, I can also state authoritatively that this is exactly what the library is.

Is top Really XMRig?

Let’s try a different approach to discover whether our suspicious “top” process is really XMRig as the command history we extracted in Part 2 suggests. linux.proc.Maps allows us to dump the memory space of a process.

(venv) $ mkdir dump-mem-pid-977
(venv) $ vol -q -f memory_dump/avml.lime -o dump-mem-pid-977 linux.proc.Maps --pid 977 --dump
Volatility 3 Framework 2.27.1

PID Process Start End Flags PgOff Major Minor Inode File Path File output

977 top 0x400000 0x401000 r-- 0x0 0 26 7 /dev/shm/kit/top (deleted) pid.977.vma.0x400000-0x401000.dmp
977 top 0x401000 0xa5f000 r-x 0x1000 0 26 7 /dev/shm/kit/top (deleted) pid.977.vma.0x401000-0xa5f000.dmp
977 top 0xa5f000 0xc40000 r-- 0x65f000 0 26 7 /dev/shm/kit/top (deleted) pid.977.vma.0xa5f000-0xc40000.dmp
[...]
(venv) $ grep -rlF xmrig dump-mem-pid-977/
dump-mem-pid-977/pid.977.vma.0xa5f000-0xc40000.dmp
(venv) $ strings -a dump-mem-pid-977/pid.977.vma.0xa5f000-0xc40000.dmp
[...]
Usage: xmrig [OPTIONS]
Network:
-o, --url=URL URL of mining server
-a, --algo=ALGO mining algorithm https://xmrig.com/docs/algorithms
--coin=COIN specify coin instead of algorithm
[...]

Finding the help text for the xmrig program in memory is conclusive enough for me. This is not a terribly surprising result, but it is nice to have additional confirmation.

Note that I tried a similar approach with linux.proc.Maps to recover the “config” file from the outbound SSH process (PID 975), but was unsuccessful. While searching the memory strings for “:3333“, however, you can see some of the chatter from the XMRig process heading down the SSH tunnel.

[...]
4a747a5e140b7eb34933347c5154cb3d671bedc2e83b3a5e16f1603a4b660000
127.0.0.1:3333
method":"submit","params":{"id":"c1dcc2a9","job_id":"77","nonce":"ef000000","result":"982e4063bea92330be26b808c7d21d3fc349bc06f0c232fd673b7a827ab30000","algo":"rx/0"}}
"cn/rto","cn/rwz","cn/zls","cn/double","cn/ccx","cn-lite/1","cn-heavy/0","cn-heavy/tube","cn-heavy/xhv","cn-pico","cn-pico/tlo","cn/upx2","rx/0","rx/wow","rx/arq","rx/graft","rx/sfx","rx/yada","argon2/chukwa","argon2/chukwav2","argon2/ninja","ghostrider"]}}
[...]

Timeline Analysis

I’m passionate about timeline analysis, and will often use it in the early stages of a case to find indicators. However, we’ve been very successful with the other evidence sources that UAC provided and timeline analysis hasn’t been a priority. But now we can use timeline analysis to fill in any details we might have missed.

First we need to create the timeline from the body file provided by UAC with the help of the Sleuthkit’s mactime tool (“mactime -d -y -b bodyfile/bodyfile.txt >bodyfile/timeline.csv“). Alternatively, you could use my ptt.sh script which creates a timeline that merges file system information with security log information including user logins, sudo commands, etc.

After loading the timeline into our CSV viewer of choice, we can jump to 2026-03-24 23:22:19, the time of the “worker” login for the session where the Father rootkit was implanted. As usual, there is a lot of noise, but the timeline generally confirms events we have already discovered.
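Because the mactime output is just CSV text, jumping to a window of interest is a simple string match on the date column. Here is a sketch on synthetic rows (the exact date format depends on your mactime options; the real target is bodyfile/timeline.csv):

```shell
# Two rows inside the window of interest and one outside it.
cat > demo-timeline.csv <<'EOF'
2026-03-24 23:22:19,0,macb,-rw-r--r--,0,0,1001,/var/log/wtmp
2026-03-24 23:24:51,24024,...b,-rw-r--r--,0,0,715378,/usr/lib/x86_64-linux-gnu/libymv.so.3
2026-03-25 01:00:00,0,m...,-rw-r--r--,0,0,2002,/var/tmp/unrelated
EOF
# ISO-style dates sort lexically, so plain string comparison bounds the window.
awk -F, '$1 >= "2026-03-24 23:22" && $1 < "2026-03-24 23:35"' demo-timeline.csv
```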

Recall from Part 1 that the logs showed us a brief SSH session at 23:23:48. This session was not logged in /var/log/wtmp, indicating that it most likely was a single command or scp session that was not allocated a PTY and did not spawn an interactive shell. The timeline shows that at 23:23:48 the last access time on the “/run/shm -> /dev/shm” symlink was updated. Does this mean that the SSH connection at 23:23:48 was how the /dev/shm/kit directory was staged? This is certainly a plausible explanation, but not conclusive.

We know that the Father rootkit exfiltrates passwords in the file /tmp/silly.txt. The timeline shows us that this file was created at 23:34:32. This is when the SOC logged in as user worker to collect system data with UAC. This is just further confirmation that the Father rootkit was operational on the system.

Wrapping Up

At this point, we have answers for the important questions for the scenario:

  1. Is the system compromised? Definitely yes!
  2. How did the attackers gain access to the system? They logged in as user “worker“, using the account password “worker“. This password was easily guessable, or it may have been disclosed or stolen. User “worker” had unlimited Sudo access, so privilege escalation was trivial.
  3. Why is there unencrypted traffic on port 22/tcp? The attacker installed a rootkit that creates a backdoor in any networked service on the system, giving any user connecting from source port 48411/tcp a root shell on the system.
  4. What is consuming CPU on the system? Process ID 977 is the XMRig cryptocurrency miner running under the executable name “top“. This process is connecting out through an SSH tunnel to 192.168.5.95. This system is now in scope and must be investigated.
  5. Why can’t the SOC see what is happening on the system? The rootkit installed by the attacker is hiding multiple processes spawned from the attacker’s backdoor session, including the cryptocurrency miner.

Here is our final timeline of important events during the incident:

23:22:19 User "worker" logs in with password from jump host (192.168.4.35) port 48364 [logs]
23:23:34 User "worker" uses sudo to execute root shell [logs]
23:23:48 Command-only/scp as user "worker" from jump host port 55504 [logs]
23:23:48 atime update on /dev/shm symlink, possible rootkit staging [bodyfile]
23:24:51 Father rootkit installed as /usr/lib/x86_64-linux-gnu/libymv.so.3 [bodyfile]
23:25:09 /etc/ld.so.preload created, points to .../libymv.so.3 [chkrootkit]
23:25:19 SSH daemon restarted [logs]
23:26:07 Unencrypted connect from 192.168.4.35 port 48411 via Father rootkit hook [memory]
23:26:22 python3 execution to promote raw shell [memory]
23:26:22 Hidden bash process started from python pty.spawn() [memory]
23:27:16 User "worker" SSH session from jump host port 48364 ends [logs]
23:27:51 /dev/shm/kit/xmrig renamed to "top" [memory]
23:28:17 ssh to 192.168.5.95 with tunnel on 3333/tcp [memory]
23:29:09 "top" process (renamed xmrig) started, comms via SSH tunnel [memory]
23:29:19 /dev/shm/kit removed [memory]
23:34:32 SOC logs in to start collecting data with UAC [logs]
23:34:32 Father rootkit stores "worker" password in /tmp/silly.txt [bodyfile]

Those of you who conducted your own investigation may have been thrown off by earlier artifacts left behind by my aborted attempts to create the scenario data. For example, there are login failures as user “worker” that could be construed as a brute force attack against this account, system crashes, and errors with libymv.so.3. My intention was that the scenario started with the login at 23:22:19, but kudos to those of you who found the earlier artifacts and invented plausible explanations for them.

Some submissions noted the fact that the kernel taint warning was triggered and speculated about a possible kernel rootkit. But only one submission actually ran down the source of the taint warning and realized it was due to the VirtualBox guest additions and not enemy action. This is serious investigative dedication! Bravo!

One last part of this series is yet to come. I will be walking through what I actually did to create the scenario activity so that you can compare your answers with what really happened. In the meantime, don’t forget to check out the reports from the contest winners.

Linux Investigation (Part 2)

Please read the previous installment of this investigation for additional background.

Having identified a potential LD_PRELOAD rootkit, I’m very curious to discover what the memory dump from the system can tell us. Note that before you can begin analyzing the memory for yourself you will need to (a) install Volatility, and (b) install a Linux symbol table (ISF file) that will work for this memory dump in .../volatility3/symbols/linux.

Process Information

One approach for finding suspicious processes is to look at the process hierarchy with linux.pstree. The Linux process hierarchy is usually rather flat, so interactive sessions tend to stand out:

(venv) $ vol -q -f memory_dump/avml.lime linux.pstree
[...]
* 0x8c7cc67e1980 937 937 1 sshd
** 0x8c7cc8278000 939 939 937 sh
*** 0x8c7cc67e4c80 940 940 939 python3
**** 0x8c7cc6011980 941 941 940 bash
***** 0x8c7cc6014c80 975 975 941 ssh
** 0x8c7cc81b1980 1005 1005 937 sshd-session
*** 0x8c7cc08b8000 1047 1047 1005 sshd-session
**** 0x8c7cc8354c80 1064 1064 1047 bash
***** 0x8c7cc351b300 1076 1076 1064 sudo
****** 0x8c7cc3859980 1078 1078 1076 sudo
******* 0x8c7cc039cc80 1079 1079 1078 bash
******** 0x8c7da0131980 119319 119319 1079 avml
[...]

The second hierarchy starting with PID 1005 is what SSH sessions normally look like: two sshd-session processes (privilege separation), then the login shell. The double sudo is an interesting wrinkle, but apparently an artifact of this user doing “sudo -s” to start a root shell rather than just “sudo bash“. Then we have the bash shell itself, where the user is running avml to grab the memory dump we are currently reviewing. This is more or less as expected.

The process hierarchy starting with PID 939, however, is just plain weird. The shell process with PID 939 is spawned directly out of the master SSH daemon, PID 937. This suggests some sort of exploit against the SSH daemon itself. Next comes a Python process (PID 940). Could this be the Python command from the original SOC report? Certainly PID 940 spawned a bash shell (PID 941), which matches what the SOC told us. That bash shell ran “ssh“– the SSH client program. What is going on here?

We can get more detail on these processes from the linux.psaux plugin:

(venv) $ vol -q -f memory_dump/avml.lime linux.psaux
[...]
939 937 sh /bin/sh
940 939 python3 python3 -c import pty; pty.spawn("/bin/bash")
941 940 bash /bin/bash
[...]
975 941 ssh ssh -F config ymv
[...]

That’s definitely the Python command line the SOC reported to us. Apparently whatever is happening in the PID 939 shell is not happening inside of an encrypted session. This also tends to point towards some sort of pre-encryption and therefore pre-authentication compromise of the master SSH daemon. Recall from our log analysis in the last installment that the /etc/ld.so.preload file containing the path of the suspected rootkit library was created immediately before the SSH daemon was restarted. Are we looking at rootkit functionality for the SSH daemon compromise?

It’s also worth examining that ssh command line (PID 975). “-F config” means read client options from a file called “config“, but where was that file dropped to disk? Also, the SOC and the system admins for the machine have no idea what the host “ymv” is. It does not resolve within this network. Best guess is that the hostname “ymv” refers to configuration in the “config” file, wherever that ended up.

Command History From Memory

UAC didn’t find command history on disk, but the attacker’s bash shell was still active at the time the memory image was taken. So that command history should be in the memory dump.

(venv) $ vol -q -f memory_dump/avml.lime linux.bash --pid 941
Volatility 3 Framework 2.27.1

PID Process CommandTime Command

941 bash 2026-03-24 23:26:26.000000 UTC rest
941 bash 2026-03-24 23:26:30.000000 UTC reset
941 bash 2026-03-24 23:26:44.000000 UTC echo $SHELL
941 bash 2026-03-24 23:26:54.000000 UTC id
941 bash 2026-03-24 23:27:25.000000 UTC cd /dev/shm/kit
941 bash 2026-03-24 23:27:26.000000 UTC ls
941 bash 2026-03-24 23:27:51.000000 UTC mv xmrig top
941 bash 2026-03-24 23:27:59.000000 UTC export PATH=.:$PATH
941 bash 2026-03-24 23:28:17.000000 UTC ssh -F config ymv
941 bash 2026-03-24 23:28:28.000000 UTC bg
941 bash 2026-03-24 23:29:09.000000 UTC top -o 127.0.0.1:3333 -B
941 bash 2026-03-24 23:29:15.000000 UTC cd ..
941 bash 2026-03-24 23:29:19.000000 UTC rm -rf kit
941 bash 2026-03-24 23:29:24.000000 UTC ps -ef | grep top
941 bash 2026-03-24 23:29:36.000000 UTC ls

This shell history is extremely useful. After what is apparently an initial typo, the user executes a “reset” command, which is part of the typical recipe for elevating a raw shell to fully interactive via the Python pty.spawn() method. Our user then checks which shell they are in (“echo $SHELL“) and what user they are running as (“id“). This is classic post-exploit behavior: a typical user in a normal interactive session would not need to run these commands.
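For reference, the full promotion recipe usually looks like the comments below (the Ctrl-Z and stty steps happen on the attacker’s local terminal, so they are shown as comments rather than executed here). The final line is a non-interactive way to convince yourself that pty.spawn() really hands the child a controlling TTY:

```shell
# Typical raw-shell promotion recipe:
#   python3 -c 'import pty; pty.spawn("/bin/bash")'   # respawn shell on a PTY
#   <Ctrl-Z>                                          # suspend locally
#   stty raw -echo; fg                                # fix the local terminal
#   reset; export TERM=xterm                          # the "reset" seen in the history
#
# Demonstration: tty(1) reports a real pseudo-terminal under pty.spawn().
python3 -c 'import pty; pty.spawn(["/bin/sh", "-c", "tty"])'
```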

The user then changes to a directory named /dev/shm/kit. This is not a typical directory on this system, so where did it come from? The fact that it was staged in /dev/shm, a memory-based file system, is suspicious. This would be a good place for an attacker to stage files that they did not want to write to the system’s local disk.

Inside this directory was apparently a file named “xmrig“. XMRig is a popular Monero cryptocurrency miner. The user renames this file to “top” and later executes this “top” program from the /dev/shm/kit directory. “export PATH=.:$PATH” adds the current working directory to the front of the search path, followed later by “top -o 127.0.0.1:3333 -B” to execute the program. Note that the command line arguments to this “top” program match typical xmrig arguments– “-o 127.0.0.1:3333” to specify the mining infrastructure to connect to, and “-B” to run in the background as a daemon.
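The effect of that PATH change is easy to reproduce in a scratch directory (standing in for /dev/shm/kit in this sketch):

```shell
# A fake "top" in the current directory shadows the real one once "." leads PATH.
d=$(mktemp -d)
printf '#!/bin/sh\necho not-the-real-top\n' > "$d/top"
chmod +x "$d/top"
cd "$d"
export PATH=.:$PATH   # same trick as the attacker's history
top                   # resolves to ./top, printing "not-the-real-top"
```

This is exactly why the renamed xmrig looks like a plausible “top” invocation: the command typed is indistinguishable from the real utility.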

After starting the renamed xmrig, we see the user remove /dev/shm/kit (“cd ..“, “rm -rf kit“). This will not impact the running processes that were started here, but will make it more difficult for inexperienced investigators to find these files. We also see the user checking that their “top” program is still running or perhaps that it is invisible (“ps -ef | grep top“), and making sure that /dev/shm/kit is gone (“ls“).

But what is happening on localhost 3333/tcp? In between the “mv” command to rename the binary and executing the renamed xmrig as “top” we see “ssh -F config ymv” (later set to run in the background with “bg“) which we also saw in the linux.psaux output. Note that this command history implies that the “config” file was in /dev/shm/kit, now deleted.

It seems reasonable to assume that 127.0.0.1:3333 is an SSH tunnel. But can we find artifacts in memory to prove that?

Network Details

The linux.sockstat plugin should give us some insight into the network behavior on the system:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | grep -F 3333 | sort -u
4026531840 libuv-worker 977 979 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 libuv-worker 977 980 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 libuv-worker 977 981 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 libuv-worker 977 982 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 ssh 975 975 4 0x8c7cc404a900 AF_INET6 STREAM TCP ::1 3333 :: 0 LISTEN -
4026531840 ssh 975 975 5 0x8c7cc4043900 AF_INET STREAM TCP 127.0.0.1 3333 0.0.0.0 0 LISTEN -
4026531840 ssh 975 975 6 0x8c7cc405e880 AF_INET STREAM TCP 127.0.0.1 3333 127.0.0.1 59182 ESTABLISHED -
4026531840 top 977 977 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 978 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 987 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 988 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 989 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -
4026531840 top 977 990 17 0x8c7cc405c280 AF_INET STREAM TCP 127.0.0.1 59182 127.0.0.1 3333 ESTABLISHED -

The suspicious SSH process that was started (PID 975) is definitely listening on 127.0.0.1:3333. There is no “-L” option showing on the command line, so we presume the tunnel configuration is in the “config” file mentioned on the command line (“-F config“).
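We never recovered the “config” file, so what follows is purely a hypothetical reconstruction. A client configuration consistent with the sockstat output (the forward destination and any authentication options are guesses) might have looked something like:

```
# HYPOTHETICAL -- the real file was deleted along with /dev/shm/kit
Host ymv
    HostName 192.168.5.95
    LocalForward 3333 127.0.0.1:3333   # destination on the far side is a guess
```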

We can also see the “top” process, which is the renamed xmrig binary, actively talking on this port. This is consistent with the “-o 127.0.0.1:3333” option we saw in the command history. XMRig uses libuv for thread management, which is where the libuv-worker processes come from. Note that the top and libuv-worker entries share the same PID (977 in the third column of output) but different thread IDs (fourth column of output above).
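You can see the same PID/TID relationship for any process on a live Linux system via /proc:

```shell
# Each thread of a process appears as /proc/<pid>/task/<tid>. Note that
# /proc/self resolves to the ls process itself, which is single-threaded,
# so expect a single TID entry here.
ls /proc/self/task
```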

But where is the SSH process connecting to? Let’s check the linux.sockstat output again:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | awk '$3 == "975"'
4026531840 ssh 975 975 3 0x8c7cc405cc00 AF_INET STREAM TCP 192.168.4.22 33440 192.168.5.95 22 ESTABLISHED -
4026531840 ssh 975 975 4 0x8c7cc404a900 AF_INET6 STREAM TCP ::1 3333 :: 0 LISTEN -
4026531840 ssh 975 975 5 0x8c7cc4043900 AF_INET STREAM TCP 127.0.0.1 3333 0.0.0.0 0 LISTEN -
4026531840 ssh 975 975 6 0x8c7cc405e880 AF_INET STREAM TCP 127.0.0.1 3333 127.0.0.1 59182 ESTABLISHED -

The outbound connection is to 192.168.5.95. We understood from the initial SOC report that all access to the system we are investigating is supposed to go through a jump host at 192.168.4.35, so this is a new system we have not heard of. The IP address is internal, and we will need to investigate this system to find out whether it too has been compromised. The scope of our investigation is growing!

We can also use linux.sockstat to investigate where our mysterious unencrypted SSH traffic is originating from. Here I am focusing on only the activity related to the initial “sh” process (PID 939) that was created:

(venv) $ vol -q -f memory_dump/avml.lime linux.sockstat | awk '$3 == "939"'
4026531840 sh 939 939 0 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -
4026531840 sh 939 939 1 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -
4026531840 sh 939 939 2 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -
4026531840 sh 939 939 8 0x8c7cc4059300 AF_INET STREAM TCP 192.168.4.22 22 192.168.4.35 48411 ESTABLISHED -

It appears that this connection originated from the jump host system, 192.168.4.35 (source port 48411).

Detailed Process Information

The linux.pslist plugin will give us process start times as well as UID and GID info for all of our suspicious processes. The linux.pslist output is quite busy, so allow me to summarize the important pieces of info:

Process  PID  PPID  UID  GID   Started (UTC)
sh       939  937   0    7823  2026-03-24 23:26:07
python3  940  939   0    7823  2026-03-24 23:26:22
bash     941  940   0    7823  2026-03-24 23:26:22
ssh      975  941   0    7823  2026-03-24 23:28:32
top      977  1     0    7823  2026-03-24 23:29:24

One item that jumps out is that all of the processes are running as root (UID zero), but with the GID 7823. No other processes in the output are using this GID and the GID does not appear in the /etc/group file captured by UAC ([root]/etc/group). Rootkits will often use a group ID value to mark processes, files, and directories that should be hidden by the rootkit. That may be what is happening here.

Also note that the PPID of the “top” process is 1, which is systemd. This is an artifact of the “-B” option that was used to invoke the program. This option tells the process to run in the background like any other daemon process. In doing so the “top” process disassociates itself from the parent shell that spawned it, inheriting PPID 1. This is also why the “top” process did not appear in linux.pstree process hierarchy while the “ssh” command did.
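The reparenting behavior is easy to reproduce from any shell: background a child from a subshell that exits immediately, and the orphan is adopted by PID 1 (or the nearest subreaper):

```shell
# Orphan a sleep process; its new PPID comes from /proc/<pid>/stat field 4.
pid=$( (sleep 3 >/dev/null 2>&1 & echo $!) )
sleep 1                               # give the kernel a moment to reparent
awk '{ print $4 }' "/proc/$pid/stat"  # typically prints 1
```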

Note that the start times for the “ssh” and “top” processes reported by linux.pslist differ from the timestamps from the command history. The linux.pslist times are approximately 15 seconds later than the corresponding command history timestamps. I have no explanation for this discrepancy, but will stick with the timestamps from the command history for consistency.

Status Check And Next Steps

Our timeline is filling in nicely. I’ve added a note at the end of each item to remind us where the information comes from:

23:22:19 User "worker" logs in with password from jump host (192.168.4.35) port 48364 [logs]
23:23:34 User "worker" uses sudo to execute root shell [logs]
23:23:48 Command-only/scp as user "worker" from jump host port 55504 [logs]
23:24:51 /usr/lib/x86_64-linux-gnu/libymv.so.3 created [bodyfile]
23:25:09 /etc/ld.so.preload created, points to .../libymv.so.3 [chkrootkit]
23:25:19 SSH daemon restarted [logs]
23:26:07 Unencrypted connect from 192.168.4.35 port 48411 spawns hidden sh (PID 939) [memory]
23:26:22 python3 execution to promote raw shell [memory]
23:26:22 Hidden bash process started from python pty.spawn() [memory]
23:27:16 User "worker" SSH session from jump host port 48364 ends [logs]
23:27:51 /dev/shm/kit/xmrig renamed to "top" [memory]
23:28:17 ssh to 192.168.5.95 with tunnel on 3333/tcp [memory]
23:29:09 "top" process (renamed xmrig) started, comms via SSH tunnel [memory]
23:29:19 /dev/shm/kit removed [memory]

From the timeline it appears that the suspicious library was planted during the original “worker” login session that started at 23:22:19. But once the hidden session was established and promoted via pty.spawn(), the legitimate login closed down. The hidden session was responsible for starting the SSH tunnel to 192.168.5.95 and running the renamed XMRig.

But there are still so many questions. How did the /dev/shm/kit directory and the suspicious library get onto the system in the first place, and when? Can we recover any of the suspect files: “libymv.so.3“, “top“, and the “config” file used for the SSH connection? Is “libymv.so.3” really an LD_PRELOAD rootkit as suspected? What else did the attacker accomplish on the system? More investigation in the next installment!

Linux Investigation (Part 1)

I recently posted a Linux scenario that I had mocked up and asked for people to submit write-ups for judging. The winners have now been announced. Congratulations to all who participated!

I also wanted to provide an analysis of my own. Obviously, I know exactly what happened because I did everything. But I’m going to approach this investigation as if I were coming in cold, without any prior knowledge.

The scenario starts with an alert from the SOC containing two important pieces of information:

  • Unencrypted traffic on port 22/tcp, specifically the string “python3 -c 'import pty; pty.spawn("/bin/bash")'”
  • Heavy CPU usage but no process can be seen consuming the CPU

We have a UAC collection from the machine, including a full memory dump.

Initial Triage

“Heavy CPU usage but no process can be seen consuming the CPU” sounds a lot like a rootkit hiding processes to me, but let’s just confirm the SOC finding first. UAC collects the output from “top -b -n1” (live_response/process/top_-b_-n1.txt), and here’s the first part of that file:

top - 19:38:21 up 15 min,  1 user,  load average: 4.38, 3.44, 1.81
Tasks: 132 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 97.7 us, 2.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 7947.4 total, 5206.4 free, 2780.6 used, 194.6 buff/cache
MiB Swap: 1101.0 total, 1097.7 free, 3.3 used. 5166.7 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 23812 14824 10756 S 0.0 0.2 0:00.70 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 pool_wo+

[...]

Looking at line #3, the CPU is indeed maxed out. The process listing below the header information should be sorted by CPU utilization, but we see nothing that remotely accounts for the amount of CPU being consumed. Seems like the SOC made a good read here.

UAC has some modules that look for common rootkit indicators, so we look in the “chkrootkit” directory in the UAC output and hit pay dirt (for those of you playing along at home, the bodystat.sh script is available from my GitHub):

$ ls chkrootkit/
etc_ld_so_preload.txt stat_etc_ld_so_preload.txt
$ cat chkrootkit/etc_ld_so_preload.txt
/lib/x86_64-linux-gnu/libymv.so.3
$ cat chkrootkit/stat_etc_ld_so_preload.txt
0|/etc/ld.so.preload|837578|-rw-r--r--|0|0|34|1774394719|1774394709|1774394709|1774394709
$ cat chkrootkit/stat_etc_ld_so_preload.txt | bodystat.sh
File: /etc/ld.so.preload
Size: 34 UID: 0 GID: 0 Inode: 837578
Access: 2026-03-24 23:25:19
Modify: 2026-03-24 23:25:09
Change: 2026-03-24 23:25:09
Birth: 2026-03-24 23:25:09

$ grep -F lib/x86_64-linux-gnu/libymv.so.3 bodyfile/bodyfile.txt | bodystat.sh
File: /usr/lib/x86_64-linux-gnu/libymv.so.3
Size: 24024 UID: 0 GID: 0 Inode: 715378
Access: 2026-03-24 23:25:19
Modify: 2026-03-24 23:24:51
Change: 2026-03-24 23:24:51
Birth: 2026-03-24 23:24:51

We have a suspicious /etc/ld.so.preload file containing the library path /usr/lib/x86_64-linux-gnu/libymv.so.3. The libymv.so.3 file was created at 2026-03-24 23:24:51 UTC and /etc/ld.so.preload at 23:25:09.
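As a sanity check, the raw epoch values from the stat output decode to exactly these timestamps (assuming GNU date):

```shell
# Decode the atime and mtime/ctime/birth epochs from the ld.so.preload stat line.
date -u -d @1774394719 '+%Y-%m-%d %H:%M:%S'   # 2026-03-24 23:25:19 (atime)
date -u -d @1774394709 '+%Y-%m-%d %H:%M:%S'   # 2026-03-24 23:25:09 (m/c/birth)
```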

There are many directions our investigation could go from this point, but I got curious to see if we could tie those file creation times to a particular login session on the machine. A quick look at live_response/system/last_-i.txt in the UAC data shows the following:

worker   pts/0        192.168.4.35     Tue Mar 24 19:34 - still logged in
worker   pts/0        192.168.4.35     Tue Mar 24 19:22 - 19:27  (00:04)
reboot   system boot  6.12.74+deb13+1- Tue Mar 24 19:21 - still running
[...]

There’s a time discrepancy here. If we assume that the session starting at 19:34 is where the UAC collection was made, the collection would appear to predate the arrival of the suspicious library by four hours, which makes no sense. I suspect a local time zone issue.

$ ls -l \[root\]/etc/localtime
lrwxrwxrwx 1 hal hal 36 Mar 24 15:48 '[root]/etc/localtime' -> /usr/share/zoneinfo/America/New_York

The “last” output will be in the default time zone for the machine, which is apparently US/Eastern time. That’s four hours earlier than UTC at this time of year.
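You can confirm the offset directly (assuming GNU date and an installed tzdata):

```shell
# Render the UTC login time in the machine's configured zone.
TZ=America/New_York date -d '2026-03-24 23:22:19 UTC' '+%H:%M:%S %Z'   # 19:22:19 EDT
```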

With that in mind, the creation times on our suspicious files line up nicely with the four-minute session by user “worker” from 23:22 – 23:27 UTC (represented in the output above as 19:22 – 19:27). Unfortunately, UAC did not capture a .bash_history file for either the “worker” user or the “root” account. Anti-forensics is a possibility worth noting, but for now we don’t have any useful command history.

Looking at the security logs should provide more details about that login session. The only logs available are the Systemd journal, so it’s time to work some magic with journalctl.

$ journalctl -D \[root\]/var/log/journal/ -q -o short-iso --facility=auth,authpriv | grep -vE '(pam_unix|systemd-logind)'
[...]
2026-03-24T23:22:19+0000 vbox sshd-session[834]: Accepted password for worker from 192.168.4.35 port 48364 ssh2
2026-03-24T23:23:34+0000 vbox sudo[910]: worker : TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
2026-03-24T23:23:48+0000 vbox sshd-session[914]: Accepted password for worker from 192.168.4.35 port 55504 ssh2
2026-03-24T23:23:48+0000 vbox sshd-session[923]: Received disconnect from 192.168.4.35 port 55504:11: disconnected by user
2026-03-24T23:23:48+0000 vbox sshd-session[923]: Disconnected from user worker 192.168.4.35 port 55504
2026-03-24T23:25:19+0000 vbox sshd[801]: Received signal 15; terminating.
2026-03-24T23:25:19+0000 vbox sshd[937]: Server listening on 0.0.0.0 port 22.
2026-03-24T23:25:19+0000 vbox sshd[937]: Server listening on :: port 22.
2026-03-24T23:27:16+0000 vbox sshd-session[834]: syslogin_perform_logout: logout() returned an error
2026-03-24T23:27:16+0000 vbox sshd-session[873]: Received disconnect from 192.168.4.35 port 48364:11: disconnected by user
2026-03-24T23:27:16+0000 vbox sshd-session[873]: Disconnected from user worker 192.168.4.35 port 48364
[...]

The “worker” user logs in (with a password) at 23:22:19, and uses sudo to get a root shell at 23:23:34.

But at 23:23:48, we see a second SSH session for “worker” that did not appear in the “last” output. Typically this means that the session did not spawn an interactive shell and was not allocated a PTY. That would indicate that the session only ran a single command specified by the remote user on their command line, or was possibly an scp. This idea is supported by the fact that the session connected and disconnected in the same second.

At 23:25:19 we see the SSH server restarted. Curiouser and curiouser.

What We Know So Far

At this point we’re starting to build a timeline of the incident:

23:22:19   User "worker" logs in with password from jump host (192.168.4.35) port 48364
23:23:34   User "worker" uses sudo to execute /bin/bash (root shell)
23:23:48   Command-only/scp session as user "worker" from jump host (port 55504)
23:24:51   /usr/lib/x86_64-linux-gnu/libymv.so.3 created
23:25:09   /etc/ld.so.preload created (points to .../libymv.so.3)
23:25:19   SSH daemon restarted
23:27:16   User "worker" SSH session from jump host ends (source port 48364)

With everything laid out like this, it becomes obvious that the /etc/ld.so.preload file was created immediately before the SSH daemon was restarted. Since the only person operating on the system at the time was our suspected attacker, it is reasonable to assume that the SSH daemon was restarted so that it would pick up the suspicious libymv.so.3 library.

libymv.so.3 certainly has all the indications of being an LD_PRELOAD type rootkit. Memory analysis is a good path for further investigation, so we will pick up there in our next installment.
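To make the “LD_PRELOAD rootkit” idea concrete, here is a minimal, harmless sketch of the technique. This is my own illustration, not code recovered from libymv.so.3: it hooks readdir()/readdir64() to hide any directory entry beginning with “libymv”, and loads via the LD_PRELOAD environment variable rather than /etc/ld.so.preload, so nothing system-wide is touched:

```shell
# Build a tiny shared object that filters directory listings.
cat > hide.c <<'EOF'
#define _GNU_SOURCE
#include <dirent.h>
#include <dlfcn.h>
#include <string.h>

/* Skip any entry whose name begins with "libymv" -- the same hiding
   trick an LD_PRELOAD rootkit plays on every dynamically linked tool. */
static int hidden(const char *name) {
    return strncmp(name, "libymv", 6) == 0;
}

struct dirent *readdir(DIR *dirp) {
    static struct dirent *(*real)(DIR *);
    if (!real) real = (struct dirent *(*)(DIR *))dlsym(RTLD_NEXT, "readdir");
    struct dirent *e;
    while ((e = real(dirp)) != NULL && hidden(e->d_name))
        ;
    return e;
}

/* ls on 64-bit glibc actually calls readdir64, so hook that too. */
struct dirent64 *readdir64(DIR *dirp) {
    static struct dirent64 *(*real)(DIR *);
    if (!real) real = (struct dirent64 *(*)(DIR *))dlsym(RTLD_NEXT, "readdir64");
    struct dirent64 *e;
    while ((e = real(dirp)) != NULL && hidden(e->d_name))
        ;
    return e;
}
EOF
gcc -shared -fPIC -o hide.so hide.c -ldl

mkdir -p demo && touch demo/libymv.so.3 demo/notes.txt
ls demo                           # both files visible
LD_PRELOAD=./hide.so ls demo      # libymv.so.3 has vanished
```

Pointing /etc/ld.so.preload at a library like this, as our attacker did, applies the hooks to every dynamically linked program on the system, which is exactly the sort of thing that would keep a process or file from showing up in standard tools.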

Linux Forensic Scenario

Let’s try something a little different for today’s blog post!

I’ve been working on ideas for a major update on my Linux forensics class, including new lab scenarios. I recently threw together a rough draft of one of my scenario ideas: built a machine, planted some malware on it, and then used UAC to capture forensic data from the system. I was pleased with the results, and thought I would share them with the larger community.

And then I thought, why not turn it into a bit of a contest? For the moment I haven’t decided on any prizes other than bragging rights, but you never know. I have decided that the deadline for submissions for judging will be April 15th, tax day here in the USA.

The Scenario

You received an escalation from your SOC. They received an alert from their NMS about suspicious traffic to one of the Linux workers in the development group’s CI/CD pipeline. The alert was for unencrypted traffic on port 22/tcp, specifically the string “python3 -c 'import pty; pty.spawn("/bin/bash")'” which triggered the alert for “reverse shell promotion” in the NMS. They note that the system is showing signs of heavy CPU usage but that they don’t see any process(es) that account for this. Following their SOP, they acquired data from the system using UAC and have escalated to you as on-call for the internal IR/Threat team.

Other information about the system:

  • There is a single shared account on the system called “worker”. It has full sudo privileges with the NOPASSWD option set.
  • All network access to the box is through a jump host at IP 192.168.4.35.
  • The UAC collection is uac-vbox-linux-20260324234043.tar.gz

Additional Comments

I threw this scenario together in a matter of hours, so when you look at the timeline of the system you will see that it got built and then compromised very quickly. For the final scenario I will doubtless do a more complete job running fake workloads for some time before the “attack” actually happens.

Similarly, you’ll probably discover that there is no significant network infrastructure around the compromised system. The “jump host” is really just another host in my lab environment that I was operating from.

But I still think there are plenty of interesting artifacts to find in this scenario. I’m leaving things deliberately open-ended because I want to see what people come up with. But the goal would be to at least account for the issues raised by the SOC: why is there unencrypted traffic on 22/tcp, why is the system burning CPU, and why can’t the SOC see what is going on? Is the system compromised? When and how did that happen?

Submissions

Submissions for judging must be received no later than 23:59 UTC on 2026-04-15. I will accept submissions in .docx, PDF, or text. You may email your submissions to hrpomeranz@gmail.com. Please try to put something like “Linux Forensic Scenario Submission” in the Subject: line to make my life easier.

Depending on the number of submissions I get, I may need more folks to help with the judging. If you’re not planning to compete but would like to help judge, please drop me a line at the email address above. I’ll let you know if I need the help once I count the number (and length) of the submissions.

Happy forensicating! Have fun!