Fun With volshell

When triaging a collection of memory images, I often find myself running multiple Volatility plugins on each image. Typically I do this by shell script, calling each plugin individually and saving the output in files. The problem with this approach is that Volatility has to re-parse the memory image each time my script calls a new plugin. This adds a lot of overhead and time. I started wondering if I could leverage volshell to run multiple plugins so that I wouldn’t have to pay the startup cost each time.

volshell Basics

volshell is an interactive shell environment for exploring a memory image. Explaining all the features of volshell would fill a book, so we’re just going to focus on the basics of starting up volshell and running plugins.

Starting volshell is straightforward. Specify a memory image with “-f” and the OS it comes from with “-w“, “-m“, or “-l” (Windows, MacOS, Linux, respectively). If we don’t want the progress meter as it’s ingesting the memory image, we can add “-q” (“quiet” mode).

$ volshell -f avml.lime -l -q
Volshell (Volatility 3 Framework) 2.27.1
Readline imported successfully

Call help() to see available functions

Volshell mode : Linux
Current Layer : layer_name
Current Symbol Table : symbol_table_name1
Current Kernel Name : kernel

(layer_name) >>>

As the startup text suggests, help is available at any time by running the help() function:

(layer_name) >>> help()

Methods:
...
* dpo, display_plugin_output
Displays the output for a particular plugin (with keyword arguments)
...

volshell methods generally have both long and abbreviated forms. For example, we’ll be using the display_plugin_output() method to run plugins. But rather than type that long string each time, we can just use dpo() instead.

For a simple example, let’s run the linux.ip.Addr plugin via volshell:

(layer_name) >>> from volatility3.plugins.linux import ip
(layer_name) >>> dpo(ip.Addr, kernel = self.config['kernel'])

NetNS Index Interface MAC Promiscuous IP Prefix Scope Type State

4026531840 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026531840 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
4026531840 2 enp0s3 08:00:27:3a:05:32 False 192.168.4.22 22 global UP
4026531840 2 enp0s3 08:00:27:3a:05:32 False fdb0:fa27:86c5:1:19cf:7bad:b8a6:c5d7 64 global UP
4026531840 2 enp0s3 08:00:27:3a:05:32 False fdb0:fa27:86c5:1:a00:27ff:fe3a:532 64 global UP
4026531840 2 enp0s3 08:00:27:3a:05:32 False fe80::a00:27ff:fe3a:532 64 link UP
4026532287 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026532287 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
4026532345 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026532345 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
4026532403 1 lo 00:00:00:00:00:00 False 127.0.0.1 8 host UNKNOWN
4026532403 1 lo 00:00:00:00:00:00 False ::1 128 host UNKNOWN
(layer_name) >>>

First we need to import the Volatility module that contains the plugin we want to invoke. The basic syntax here is “from volatility3.plugins.<os> import <module>“. “<os>” will be “windows“, “mac“, or “linux“. Since we’re running a Linux plugin, “<module>” is the word that appears after “linux.” in the plugin name. For example, if we were trying to run “linux.elfs.Elfs“, then “<module>” is “elfs“.

We use the dpo() method to actually run the plugin and get the output. If we were invoking the plugin on the command line, we would specify “linux.ip.Addr” as the plugin name. But here in Linux volshell, we can leave off the “linux.“. After the plugin name, always specify “kernel = self.config['kernel']” to supply the kernel argument the plugin requires.

If you want to run another plugin, just repeat the pattern. Import the appropriate Volatility class and run dpo() as before:

(layer_name) >>> from volatility3.plugins.linux import pstree
(layer_name) >>> dpo(pstree.PsTree, kernel = self.config['kernel'])

OFFSET (V) PID TID PPID COMM

0x8c7cc0281980 1 1 0 systemd
* 0x8c7cc0d79980 310 310 1 systemd-journal
* 0x8c7cc8268000 357 357 1 systemd-timesyn
* 0x8c7cc81a6600 365 365 1 systemd-udevd
* 0x8c7cc67e0000 687 687 1 avahi-daemon
** 0x8c7cc9986600 718 718 687 avahi-daemon
...

What’s great about this approach is that the memory image was already parsed when volshell started up. So each plugin runs very quickly.

Changing Output Modes

In many cases, it’s better for my workflow to get the plugin output in JSON format rather than the standard text output. I’d be embarrassed to admit how long the following little recipe took me to figure out, so let’s just get to the code:

(layer_name) >>> from volatility3.cli import text_renderer
(layer_name) >>> from volatility3.plugins.linux import psaux
(layer_name) >>> treegrid = gt(psaux.PsAux, kernel = self.config['kernel'])
(layer_name) >>> treegrid.populate()
(layer_name) >>> rt(treegrid,text_renderer.JsonLinesRenderer())

{"ARGS": "/sbin/init", "COMM": "systemd", "PID": 1, "PPID": 0, "__children": []}
{"ARGS": "[kthreadd]", "COMM": "kthreadd", "PID": 2, "PPID": 0, "__children": []}
{"ARGS": "[pool_workqueue_]", "COMM": "pool_workqueue_", "PID": 3, "PPID": 2, "__children": []}
{"ARGS": "[kworker/R-kvfre]", "COMM": "kworker/R-kvfre", "PID": 4, "PPID": 2, "__children": []}
{"ARGS": "[kworker/R-rcu_g]", "COMM": "kworker/R-rcu_g", "PID": 5, "PPID": 2, "__children": []}
...

First we’re importing the text_renderer module from volatility3.cli. This module contains classes for outputting various text formats, like JsonLinesRenderer() for single-line JSON format. Other options include JsonRenderer() for “pretty-printed” JSON output, or CSVRenderer() for comma-separated values formatting.

Next we import the module for the Volatility plugin we want to invoke, just as before. But rather than calling dpo(), we create a new treegrid object with generate_treegrid() (abbreviated “gt()“). The arguments to gt() are the same as those for dpo().

gt() merely creates the treegrid object. We still have to call the treegrid.populate() method to load data into the object. Once we have populated the treegrid with data, we can invoke render_treegrid() (“rt()“) to output the data with our chosen text renderer.

Stumbling Towards Automation

Clearly this approach requires a lot of redundant typing. Automating the task of running multiple plugins through volshell is the obvious next step. My ptt.sh script has an initial attempt at this. At some point, I’d like to turn this idea into a standalone script outside of ptt.sh.
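As a rough sketch of what that automation might look like, here’s a small Python driver that writes a batch of volshell commands to a file and (commented out) feeds them to volshell on standard input. The plugin list, the image name, and the assumption that volshell will execute commands from stdin are all mine, not taken from ptt.sh; verify against your own install.

```python
# Hypothetical sketch: batch several plugins through one volshell session.
# Assumes volshell (an ordinary Python interactive shell) will execute
# commands fed to it on stdin -- an untested assumption, not documented API.
import pathlib
import tempfile

commands = """\
from volatility3.plugins.linux import ip, pstree, psaux
for mod, name in ((ip, "Addr"), (pstree, "PsTree"), (psaux, "PsAux")):
    dpo(getattr(mod, name), kernel=self.config["kernel"])
"""

script = pathlib.Path(tempfile.mkdtemp()) / "volshell-cmds.py"
script.write_text(commands)

# Uncomment to actually run (requires volshell on your PATH and an image):
# import subprocess
# with script.open() as fh:
#     subprocess.run(["volshell", "-f", "avml.lime", "-l", "-q"], stdin=fh)
```

Because the memory image is only parsed once at volshell startup, all three plugins would run against the already-loaded image.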

jq For Forensics

jq is a tremendously useful tool for dealing with JSON data. But the documentation that exists seems to be targeted at developers parsing deeply nested JSON structures to transform them into other JSON structures. In my DFIR role, I typically deal with streams of fairly simple JSON records, usually some sort of log, that I need to transform into structured text, such as comma-separated (CSV) or tab-separated (TSV) output. I’ve spent a lot of time running through reference manuals and endless Stack Overflow postings to get to a reasonable level with jq. I wanted to share some of the things I’ve learned along the way.

Start With The Basics

At its simplest, jq is an excellent JSON pretty printer:

$ jq . journal.json
{
"_MACHINE_ID": "0f2f13b9dce0451591ae0dc418f6c96f",
"_RUNTIME_SCOPE": "system",
"_HOSTNAME": "vbox",
"_SOURCE_BOOTTIME_TIMESTAMP": "0",
"MESSAGE": "Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)",
"__MONOTONIC_TIMESTAMP": "6400064",
"_SOURCE_MONOTONIC_TIMESTAMP": "0",
"_BOOT_ID": "2a5a598d4f6142c7b7719eed38c1a2b9",
"SYSLOG_IDENTIFIER": "kernel",
"_TRANSPORT": "kernel",
"PRIORITY": "5",
"SYSLOG_FACILITY": "0",
"__CURSOR": "s=0a047604dca842218e0807bc796d4cb7;i=1;b=2a5a598d4f6142c7b7719eed38c1a2b9;m=61a840;t=64dc728142e95;x=852824913ddff90e",
"__REALTIME_TIMESTAMP": "1774367626505877"
}
{
"_MACHINE_ID": "0f2f13b9dce0451591ae0dc418f6c96f",
"MESSAGE": "Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet",

...

The basic syntax here is “jq <script> <jsonfile> ...“, where <script> is some sort of translation script in jq‘s own particular scripting language. The script “.” is essentially a null transformation that simply tells jq to output whatever it sees in its input <jsonfile>. The default output style for jq is the pretty-printed style you see above.

Some of you will recognize the data above as Systemd journal entries. Normally we would work with the Systemd journal via the journalctl command. But exported journal data from one of my lab systems is a good example set for showing you some useful jq tips and tricks that you can apply to any sort of exported logging stream.

Other Output Modes

Suppose we just wanted to output the “MESSAGE” field from each record. Just specify the field you want to output with a leading “.“:

$ jq .MESSAGE journal.json
"Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

Because the value of the MESSAGE field is a string, jq outputs each message surrounded by double quotes. If you don’t want the quoting, use the “-r” option for raw mode output:

$ jq -r .MESSAGE journal.json
Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)
Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet
...

Suppose we wanted to output multiple fields as columns of structured text. jq includes support for both “@csv” and “@tsv” output modes:

$ jq -r '[.__REALTIME_TIMESTAMP, ._HOSTNAME, .MESSAGE] | @csv' journal.json
"1774367626505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"1774367626505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

jq transformation scripts use a pipelining syntax. Here we’re sending the fields we want to output into the “@csv” formatting tool. “@csv” wants its inputs as a JSON array, so we create an array on the fly simply by enclosing the fields we want to output with square brackets (“[..., ..., ...]“). The “@csv” output method automatically quotes each field and handles escaping any double quotes that might be included.
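For cross-checking (or for environments without jq), the same array-to-CSV step can be reproduced with Python’s csv module. The record below is a trimmed, made-up stand-in for a journal entry:

```python
import csv
import io
import json

# A trimmed stand-in for one journal record (all values are strings):
record = json.loads('{"__REALTIME_TIMESTAMP": "1774367626505877",'
                    ' "_HOSTNAME": "vbox", "MESSAGE": "Reached target sockets."}')

buf = io.StringIO()
# jq's @csv quotes string fields; QUOTE_ALL matches that behavior here,
# since every journal field is a string.
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(
    [record["__REALTIME_TIMESTAMP"], record["_HOSTNAME"], record["MESSAGE"]])
print(buf.getvalue().strip())
# -> "1774367626505877","vbox","Reached target sockets."
```

The list literal plays the same role as jq’s on-the-fly “[..., ..., ...]” array.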

If you want other delimiters besides the traditional commas or tabs, jq can also output arbitrary text:

$ jq -r '"\(.__REALTIME_TIMESTAMP)|\(._HOSTNAME)|\(.MESSAGE)"' journal.json
1774367626505877|vbox|Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)
1774367626505925|vbox|Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet
...

Use double quotes ("...") to enclose your output template. Use “\(.fieldname)” to output the value of specific fields. Anything else in your template is output as literal text. Here I’m outputting pipe-delimited text with the same three fields as in our CSV example above.

Note that our output template can use the typical escape sequences like “\t” for tabs. So another way to produce tab-delimited text would be:

$ jq -r '"\(.__REALTIME_TIMESTAMP)\t\(._HOSTNAME)\t\(.MESSAGE)"' journal.json
1774367626505877 vbox Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)
1774367626505925 vbox Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet
...

However, it’s almost certainly easier to use '[..., ..., ...] | @tsv' for this.

Transforming Data With Builtin Operators

jq includes a wide variety of builtin operators for data transformation and math. For example, suppose we wanted to format those __REALTIME_TIMESTAMP fields in the Systemd journal into human-readable strings:

$ head -1 journal.json | jq -r '(.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T")'
2026-03-24 15:53:46

There’s a lot going on here, so let’s break it down a bit at a time. __REALTIME_TIMESTAMP is a string: if you look at the pretty-printed output above, the values are displayed in double quotes, meaning they are string type values. Ultimately we want to feed the __REALTIME_TIMESTAMP value into strftime() to produce formatted text, but strftime() wants numeric input. The first thing to do, then, is to convert the string into a number with “tonumber“. The jq piping syntax is how we express this transformation.

Our next problem is that __REALTIME_TIMESTAMP is in microseconds, but strftime() wants good old Unix epoch seconds. So we do some math with the traditional “/” operator for division. This actually converts our value into a decimal number (“1774367626.505877“), but that’s good enough for strftime(). Finally we pipeline the number we calculated into the strftime() function. We give strftime() an appropriate format string to get the output we want.

This works great, but we’re throwing away the microseconds information. What if we wanted to display that as part of the timestamp? Time to introduce some more useful string operations:

$ head -1 journal.json | jq -r '((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) + 
(.__REALTIME_TIMESTAMP | .[-6:])'

2026-03-24 15:53:46.505877

Looking at the back part of our expression on the second line above, we are using jq‘s slicing operation “.[start:end]“. Since we are using a negative offset for the start value, we are counting backwards six characters from the end of the string. With no end value specified, the slice runs to the end of the string.

Like many other scripting languages, jq supports string concatenation with the addition operator (“+“). Here we are adding the formatted string output from strftime() and the microseconds value we sliced out of the string. Note that the strftime() format has been updated to output a literal “.” between the formatted text and the microseconds.
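The arithmetic and the slice-and-concatenate trick are easy to sanity-check in plain Python, using the same timestamp as above. Python’s datetime stands in for jq’s strftime(), which formats in UTC:

```python
from datetime import datetime, timezone

raw = "1774367626505877"            # __REALTIME_TIMESTAMP, a string
seconds = int(raw) / 1_000_000      # microseconds -> epoch seconds
stamp = datetime.fromtimestamp(seconds, tz=timezone.utc)

# Formatted seconds, plus the last six characters of the raw string:
result = stamp.strftime("%Y-%m-%d %H:%M:%S.") + raw[-6:]
print(result)                       # -> 2026-03-24 15:53:46.505877
```

Same answer as the jq pipeline, which is a nice way to confirm you sliced the right six characters.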

Suppose we wanted to include the human-readable timestamp we just created instead of the raw epoch microseconds for our “@csv” output. The trick is to take our jq code for producing human readable timestamps and drop it into our “[...] | @csv” pipeline in place of the __REALTIME_TIMESTAMP field:

$ jq -r '[((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) + (.__REALTIME_TIMESTAMP | .[-6:]), ._HOSTNAME, .MESSAGE] | @csv' journal.json
"2026-03-24 15:53:46.505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"2026-03-24 15:53:46.505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

Scripting With jq

Obviously that jq expression is pretty horrible to type on the command line. You can always put any jq script into a text file, then run it against your data with the “-f” option:

$ jq -r -f csv-journal.jq journal.json
"2026-03-24 15:53:46.505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"2026-03-24 15:53:46.505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

In this instance, our csv-journal.jq file is the jq recipe from our command line example, but without the single quotes. Since jq doesn’t care about whitespace in scripts, we can format our recipe with newlines and indentation to make it more readable:

$ cat csv-journal.jq 
[((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) +
(.__REALTIME_TIMESTAMP | .[-6:]),
._HOSTNAME, .MESSAGE] | @csv

On Linux systems you can even use jq in a “bang path” at the top of the script so it automatically gets invoked as the interpreter:

$ cat csv-journal.jq
#!/usr/bin/jq -rf

[((.__REALTIME_TIMESTAMP | tonumber) / 1000000 | strftime("%F %T.")) +
(.__REALTIME_TIMESTAMP | .[-6:]),
._HOSTNAME, .MESSAGE] | @csv

Note that the new interpreter path at the top of the script includes the “-rf” options for raw output (“-r“) and interpreting the rest of the file as a script (“-f“).

Once we have the interpreter path at the top of the script, we can just cat our JSON data into the script without invoking jq directly:

$ chmod +x csv-journal.jq 
$ cat journal.json | ./csv-journal.jq
"2026-03-24 15:53:46.505877","vbox","Linux version 6.12.74+deb13+1-amd64 (debian-kernel@lists.debian.org) (x86_64-linux-gnu-gcc-14 (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44) #1 SMP PREEMPT_DYNAMIC Debian 6.12.74-2 (2026-03-08)"
"2026-03-24 15:53:46.505925","vbox","Command line: BOOT_IMAGE=/boot/vmlinuz-6.12.74+deb13+1-amd64 root=UUID=d6cf7c18-1df5-4f29-a6f8-d5c4947c1df7 ro quiet"
...

This might make things easier for less-technical users.

Selecting Records

When working with streams of records, it’s typical to want to only operate on certain records. For example, suppose we only wanted to see log messages from the “sudo” command. In the Systemd journal, these messages have the “SYSLOG_IDENTIFIER” field set to “sudo“:

$ jq -r 'select(.SYSLOG_IDENTIFIER == "sudo") | .MESSAGE' journal.json
worker : user NOT in sudoers ; TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/2 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
pam_unix(sudo:session): session opened for user root(uid=0) by worker(uid=1000)
pam_unix(sudo:session): session closed for user root
worker : TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
pam_unix(sudo:session): session opened for user root(uid=0) by worker(uid=1000)
pam_unix(sudo:session): session closed for user root
worker : TTY=pts/1 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
pam_unix(sudo:session): session opened for user root(uid=0) by worker(uid=1000)
worker : TTY=pts/3 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
...

The new magic is jq‘s select() operator up at the front of that pipeline. If the conditional you give to select() evaluates to true, then the record you have matched gets passed down for processing by the rest of the pipeline. If not, then that record is skipped.

Logical operators (“and“, “or“, “not“) and parentheses are allowed. And you can do pattern matching with PCRE-like expressions. For example, the really interesting lines in Sudo logs are the ones that show the command being invoked (“COMMAND=“):

$ jq -r 'select(.SYSLOG_IDENTIFIER == "sudo" and (.MESSAGE | test("COMMAND="))) | .MESSAGE' journal.json
worker : user NOT in sudoers ; TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/2 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/0 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/1 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
worker : TTY=pts/3 ; PWD=/home/worker ; USER=root ; COMMAND=/bin/bash
...

For pattern matching, just pipeline the field you want to match against into the test() operator. Here I’m matching the literal string “COMMAND=” against the MESSAGE field. The pattern match is joined with our original selector for “sudo” in the SYSLOG_IDENTIFIER field using a logical “and“.
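The select()-plus-test() logic maps directly onto a list comprehension if you ever need to replicate it in Python. The sample records here are made up, in journal-export style:

```python
import json
import re

# Made-up sample records in journal-export (JSON lines) style:
lines = [
    '{"SYSLOG_IDENTIFIER": "sudo", "MESSAGE": "worker : TTY=pts/2 ; COMMAND=/bin/bash"}',
    '{"SYSLOG_IDENTIFIER": "sudo", "MESSAGE": "pam_unix(sudo:session): session closed"}',
    '{"SYSLOG_IDENTIFIER": "cron", "MESSAGE": "COMMAND=/usr/bin/true"}',
]

# Equivalent of:
#   select(.SYSLOG_IDENTIFIER == "sudo" and (.MESSAGE | test("COMMAND="))) | .MESSAGE
matches = [rec["MESSAGE"] for rec in map(json.loads, lines)
           if rec.get("SYSLOG_IDENTIFIER") == "sudo"
           and re.search("COMMAND=", rec["MESSAGE"])]
print(matches)    # only the first record survives both conditions
```

As in jq, a record has to pass both conditions before its MESSAGE field reaches the output.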

Here’s another example showing a useful regex when dealing with SSH logs, just to give you a flavor of things you can do with regular expression matching:

$ jq -r 'select(._COMM == "sshd" and 
(.MESSAGE | test("^((Accepted|Failed) .* for|Invalid user) "))) | .MESSAGE' journal.json

Invalid user mary from 192.168.10.31 port 55746
Failed password for invalid user mary from 192.168.10.31 port 55746 ssh2
Accepted password for hal from 192.168.4.22 port 42310 ssh2
...

Enough For Now

Hopefully this is enough to get you started writing your own basic jq scripts. As with many things, the rest you pick up as you practice and get frustrated. The jq reference manual is useful for checking the syntax of different built-in operators, but I often find the examples more frustrating than helpful. Searching Stack Overflow can often yield more useful results.

Feel free to drop your questions into the comments, or reach out to me via social media or email. Maybe your questions will turn this single blog article into a series!

Linux Notes: ls and Timestamps

There’s an old riddle in Unix circles: “Name a letter that is not an option for the ls command”. The advent of the GNU version of ls has only made this more difficult to answer. Even if you’re a Unix/Linux power user, you’ve probably only memorized a small handful of the available options.

For example, I have “ls -lArt” burned into my brain from my Sys Admin days. “-l” for a detailed listing, “-A” to show hidden files and directories (but not the “.” and “..” links like “-a“), “-t” to sort by last modified time, and “-r” to reverse the sort so the newest files appear right above your next shell prompt.

$ ls -lArt
total 1288
-rw-r--r-- 1 root root 9 Aug 7 2006 host.conf
-rw-r--r-- 1 root root 433 Aug 23 2020 apg.conf
-rw-r--r-- 1 root root 26 Dec 20 2020 libao.conf
-rw-r--r-- 1 root root 12813 Mar 27 2021 services
-rw-r--r-- 1 root root 769 Apr 10 2021 profile
-rw-r--r-- 1 root root 449 Nov 29 2021 mailcap.order
-rw-r--r-- 1 root root 119 Jan 10 2022 catdocrc
...
-rw-r--r-- 1 root root 52536 Feb 23 11:44 mailcap
-rw-r--r-- 1 root root 108979 Mar 2 09:24 ld.so.cache
-rw-r--r-- 1 root root 75 Mar 3 18:08 resolv.conf
drwxr-xr-x 5 root lp 4096 Mar 5 19:52 cups

You’ll note that the timestamps are displayed in two different formats. The oldest files show “month day year”, while the newer files show “month day hh:mm”. By default, ls shows the year instead of the time of day for files more than six months old.

Personally I prefer consistent ISO-style timestamps with “--time-style=long-iso“:

$ ls -lArt --time-style=long-iso
total 1288
-rw-r--r-- 1 root root 9 2006-08-07 13:14 host.conf
-rw-r--r-- 1 root root 433 2020-08-23 10:52 apg.conf
-rw-r--r-- 1 root root 26 2020-12-20 11:21 libao.conf
-rw-r--r-- 1 root root 12813 2021-03-27 18:32 services
-rw-r--r-- 1 root root 769 2021-04-10 16:00 profile
-rw-r--r-- 1 root root 449 2021-11-29 08:07 mailcap.order
-rw-r--r-- 1 root root 119 2022-01-10 19:08 catdocrc
...
-rw-r--r-- 1 root root 52536 2026-02-23 11:44 mailcap
-rw-r--r-- 1 root root 108979 2026-03-02 09:24 ld.so.cache
-rw-r--r-- 1 root root 75 2026-03-03 18:08 resolv.conf
drwxr-xr-x 5 root lp 4096 2026-03-05 19:52 cups

While “-t” sorts on last modified time by default, other options allow you to sort on and display other timestamps. For example, “-u” sorts on and displays last access time. “-u” is hardly a memorable mnemonic for last access time, but “-a” was already taken for showing hidden files.

It’s a pain trying to remember the one-letter options for the other timestamps, and note that there isn’t even a short option for sorting/displaying on file creation time. So I just use “--time=” to pick the timestamp I want:

$ ls -lArt --time=birth --time-style=long-iso
total 1288
-rw-r--r-- 1 root root 1013 2025-04-10 10:27 fstab
drwxr-xr-x 2 root root 4096 2025-04-10 10:27 ImageMagick-6
drwxr-xr-x 2 root root 4096 2025-04-10 10:27 GNUstep
...
-rw-r--r-- 1 root root 142 2026-02-23 11:41 shells
-rw-r--r-- 1 root root 52536 2026-02-23 11:44 mailcap
-rw-r--r-- 1 root root 108979 2026-03-02 09:24 ld.so.cache
-rw-r--r-- 1 root root 75 2026-03-03 18:08 resolv.conf

Here we’re sorting on and displaying file creation times (“--time=birth“). You can use “--time=atime” or “--time=ctime” for the other timestamps.

If this command line seems long and unwieldy, remember that you can create aliases for commands in your .bashrc or other startup files:

alias ls='ls --color=auto --time-style=long-iso'
alias lb='ls -lArt --time=birth'

With normal ls commands, I’ll always get colored output, and “long-iso” dates whenever I use “-l“. I can use lb whenever I want file creation times. Note that alias definitions “stack”: the “lb” alias will pick up the color and time-style options from my basic “ls” alias, so I don’t need to include the “--time-style” option in the “lb” alias.
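If you want to convince yourself that the stacking really happens, here’s a quick check using throwaway echo aliases instead of ls (driven through bash from Python; non-interactive shells need expand_aliases turned on for alias expansion to occur at all):

```python
import subprocess

# Throwaway aliases: "stacked" expands to "base STACKED", and bash then
# expands "base" as an alias too, so both definitions apply.
demo = "\n".join([
    "shopt -s expand_aliases",
    "alias base='echo BASE'",
    "alias stacked='base STACKED'",
    "stacked",
])
out = subprocess.run(["bash", "-c", demo], capture_output=True, text=True)
print(out.stdout.strip())    # -> BASE STACKED
```

The same recursive first-word expansion is what lets “lb” inherit the options from the “ls” alias above.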