Linux LKM Persistence

Back in August, Ruben Groenewoud posted two detailed articles on Linux persistence mechanisms and then followed that up with a testing/simulation tool called PANIX that implements many of these persistence mechanisms. Ruben’s work was, in turn, influenced by a series of articles by Pepe Berba and work by Eder Ignacio. Eder’s February article on persistence with udev rules seems particularly prescient after Stroz/AON reported in August on a long-running campaign using udev rules for persistence. I highly recommend all of this work, and frankly I’m including these links so I personally have an easy place to go find them whenever I need them.

In general, all of this work focuses on using persistence mechanisms for running programs in user space. For example, PANIX sets up a simple reverse shell by default (though the actual payload can be customized) and the “sedexp” campaign described by Stroz/AON used udev rules to trigger a custom malware executable.

Reading all of this material got my evil mind working, and I started thinking about how I might handle persistence if I were working with a Linux loadable kernel module (LKM) type rootkit. Certainly I could use any of the user space persistence mechanisms in PANIX that run with root privilege (or at least have CAP_SYS_MODULE capability) to call modprobe or insmod to load my evil kernel module. But what about other Linux mechanisms for specifically loading kernel modules at boot time?

Hiks Gerganov has written a useful article summarizing how to load Linux modules at boot time. If you want to be traditional, you can always put the name of the module you want to load into /etc/modules. But that seems a little too obvious, so instead we are going to use the more flexible systemd-modules-load service to get our evil kernel module installed.

systemd-modules-load looks in multiple directories for configuration files specifying modules to load, including /etc/modules-load.d, /usr/lib/modules-load.d, and /usr/local/lib/modules-load.d. systemd-modules-load also looks in /run/modules-load.d, but /run is typically a tmpfs style file system that does not persist across reboots. Configuration file names must end with “.conf” and simply contain the names of the modules to load, one name per line.
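
For reference, a perfectly ordinary configuration file might look something like this (the file name and module names are just an illustrative example, not tied to any particular distro):

$ cat /etc/modules-load.d/virtio.conf
# load the virtio drivers at boot
virtio_net
virtio_blk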

For my examples, I’m going to use the Diamorphine LKM rootkit. Diamorphine started out as a proof of concept rootkit, but a Diamorphine variant has recently been found in the wild. Diamorphine allows you to choose a “magic string” at compile time– any file or directory name that starts with the magic string will automatically be hidden by the rootkit once the rootkit is loaded into the kernel. In my examples I am using the magic string “zaq123edcx”.

First we need to copy the Diamorphine kernel module, typically compiled as diamorphine.ko, into a directory under /usr/lib/modules where it can be found by the modprobe command invoked by systemd-modules-load:

# cp diamorphine.ko /usr/lib/modules/$(uname -r)/kernel/drivers/block/zaq123edcx-diamorphine.ko
# depmod

Note that the directory under /usr/lib/modules is kernel version specific. You can put your evil module anywhere under /usr/lib/modules/*/kernel that you like. Notice that by using the magic string in the file name, we are relying on the rootkit itself to hide the module. Of course, if the victim machine receives a kernel update then your Diamorphine module in the older kernel directory will no longer be loaded and your evil plots could end up being exposed.

The depmod step is necessary to update the /usr/lib/modules/*/modules.dep and /usr/lib/modules/*/modules.dep.bin files. Until these files are updated, modprobe will be unable to locate your kernel module. Unfortunately, depmod puts the path name of your evil module into both of the modules.dep* files. So you will probably want to choose a less obvious name (and magic string) than the one I am using here.
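
You can confirm what depmod recorded (and, with your hunting hat on, scan for module names that look out of place) with a quick grep. Using the deliberately obvious example name from above, you would see something like:

# grep diamorphine /usr/lib/modules/$(uname -r)/modules.dep
kernel/drivers/block/zaq123edcx-diamorphine.ko: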

The only other step needed is to create a configuration file for systemd-modules-load:

# echo zaq123edcx-diamorphine >/usr/lib/modules-load.d/zaq123edcx-evil.conf

The configuration file is just a single line– whatever name you copied the evil module to under /usr/lib/modules, but without the “.ko” extension. Here again we name the configuration file with the Diamorphine magic string so the file will be hidden once the rootkit is loaded.

That’s all the configuration you need to do. Load the rootkit manually by running “modprobe zaq123edcx-diamorphine” and rest easy in the knowledge that the rootkit will load automatically whenever the system reboots.

Finding the Evil

What artifacts are created by these changes? The mtime on the /usr/lib/modules-load.d directory and the directory where you installed the rootkit module will be updated. Aside from putting the name of your evil module into the modules.dep* files, the depmod command updates the mtime on several other files under /usr/lib/modules/*:

/usr/lib/modules/.../modules.alias
/usr/lib/modules/.../modules.alias.bin
/usr/lib/modules/.../modules.builtin.alias.bin
/usr/lib/modules/.../modules.builtin.bin
/usr/lib/modules/.../modules.dep
/usr/lib/modules/.../modules.dep.bin
/usr/lib/modules/.../modules.devname
/usr/lib/modules/.../modules.softdep
/usr/lib/modules/.../modules.symbols
/usr/lib/modules/.../modules.symbols.bin

Timestomping these files and directories could make things more difficult for hunters.
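
For hunters, one quick (if noisy) first pass is to look for recently changed files under the module and modules-load.d directories. Comparing modification times against change times can also help, since simple touch-style timestomping resets mtime but cannot reset ctime. A rough sketch, assuming a live system with GNU find (tune the 30-day window to taste):

# find /usr/lib/modules /etc/modules-load.d /usr/lib/modules-load.d /usr/local/lib/modules-load.d -type f \( -mtime -30 -o -ctime -30 \) -ls 2>/dev/null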

But loading the rootkit is also likely to “taint” the kernel. You can try looking at the dmesg output for taint warnings:

# dmesg | grep taint
[ 8.390098] diamorphine: loading out-of-tree module taints kernel.
[ 8.390112] diamorphine: module verification failed: signature and/or required key missing - tainting kernel

However, these log messages can be removed by the attacker, or simply age out of the kernel ring buffer and get cycled out by normal log rotation (if the machine has been running long enough). So you should also look at /proc/sys/kernel/tainted:

# cat /proc/sys/kernel/tainted
12288

Any non-zero value means that the kernel is tainted. To interpret the value, here is a trick based on an idea in the kernel.org “Tainted kernels” documentation:

# taintval=$(cat /proc/sys/kernel/tainted)
# for i in {0..18}; do [[ $(($taintval>>$i & 1)) -eq 1 ]] && echo $i; done
12
13

Referring to the kernel.org document, bit 12 being set means an “out of tree” (externally built) module was loaded. Bit 13 means the module was unsigned. Notice that these flags correspond to the log messages found in the dmesg output above.

While this is a handy bit of command-line kung fu, I wanted something in a more portable format and with more verbose output. So I present to you chktaint.sh:

$ chktaint.sh
externally-built (“out-of-tree”) module was loaded
unsigned module was loaded

By default chktaint.sh reads the value from /proc/sys/kernel/tainted on the live system. But in many cases you may be looking at captured evidence offline. So chktaint.sh also allows you to specify an alternate file path (“chktaint.sh /path/to/evidence/file”) or simply a raw numeric value from /proc/sys/kernel/tainted (“chktaint.sh 12288”).
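
If you just want the core logic, a stripped-down sketch of the same idea looks something like this (only a few of the taint bits get verbose descriptions here; see the kernel.org tainted-kernels documentation for the full list):

#!/bin/bash
# Minimal taint decoder: defaults to the live system, or pass a file path
# or a raw numeric value as the first argument.

arg="${1:-/proc/sys/kernel/tainted}"
if [[ -f "$arg" ]]; then
    taintval=$(cat "$arg")
else
    taintval="$arg"
fi

for i in {0..18}; do
    if [[ $((taintval >> i & 1)) -eq 1 ]]; then
        case $i in
             0) echo "proprietary module was loaded" ;;
             7) echo "kernel died recently, i.e. there was an OOPS or BUG" ;;
            12) echo "externally-built (\"out-of-tree\") module was loaded" ;;
            13) echo "unsigned module was loaded" ;;
             *) echo "taint bit $i is set (see the kernel.org tainted-kernels doc)" ;;
        esac
    fi
done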

The persistence mechanism(s) deployed by the attacker are often the best way to detect whether or not a system is compromised. If the attacker is using an LKM rootkit, checking /proc/sys/kernel/tainted is often a good first step in determining if you have a problem. This can be combined with tools like chkproc (find hidden processes) and chkdirs (find hidden directories) from the chkrootkit project.

“You Caught Me In An Introspective Moment”

I recently was given a survey to fill out by an organization I do training for. I suppose it’s a pretty predictable set of questions about who I am and how I got into the industry, and advice I have for people who are just starting out. But it caught me at just the right moment and I ended up going into some depth. So if you’re looking for a bit more about me and my journey, and maybe a little bit of life advice, read on!

What is your Name and Title?

My name is Hal Pomeranz, and I’m a “lone eagle”– an independent consultant running my own business.

Titles are a little weird when you are a one-man shop like I am. Officially I’m “President”, “CEO”, and a host of other titles. But it seems a bit grandiose to claim to be CEO of just little old me. “Consultant” or “Principal Consultant” seems a bit closer to the truth.

Tell me about what you do in that occupation?

They always say that running your own business is like working two jobs. The boring part of my business is the “business stuff”—contracts, invoicing, collections, taxes, insurance, etc. Frankly I try to automate or outsource as much of that nonsense as possible.

As far as technical work, my current practice is centered around Digital Forensics and Incident Response—helping companies that have had a security incident figure out what happened and get fully operational again. But there are many different aspects to that general description. For example, right now I’m helping one of my clients proactively improve their detection capability to help spot incidents as early as possible.

I also create and teach training courses to help people learn to do some of what I do.

I’ve been diversifying my practice by taking on occasional Expert Witness work, acting as a technical expert and weighing in with my opinion on various court matters. Many of these cases have centered around tech support scams that target unsophisticated computer users—particularly the elderly.

Do you have any certifications or degrees?

I have a Bachelors in Math with a minor in Computer Science. I tried going back to grad school for my Masters at one point, but working in the industry was so much more fun!

I earned a raft of SANS/GIAC certifications because they had a policy that you had to be certified in any class you taught for them—including incidentally the course that I authored.

How do certifications help you out in the industry?

I’ve been working in IT for almost forty years at this point, so for me personally my experience counts for much more than any certification. But when you are first starting out, I understand the feeling that having certifications can help you pass through HR filters and generally make you stand out from your peers.

If you will forgive a bit of editorializing, I generally consider certifications to be a tax on our industry. The difficulty is that many employers lack the in-house expertise to differentiate qualified from unqualified candidates. They fall back on certifications as a CYA maneuver, “Well it’s a shame that candidate didn’t work out, but we did our due diligence and made sure they had the correct certifications.” I don’t know how to solve this problem.

Why did you choose these certs and degrees?

I was interested in computers from a very young age, but when I went to college in the mid-1980s, Computer Science was still not widely available as a major outside of pure tech schools. I went to college fully intending to study Electrical Engineering, which was one path into Computer Science. Then I found out that EE was a five-year degree program with essentially no room for electives. I decided a Math degree would be easier. My first college math class was Discrete Mathematics and that experience made me realize that an Applied Math degree supplemented with as many CS classes as I could take was pretty much exactly what I wanted to do.

What got you interested in IT/Cybersecurity?

I was always fascinated by computers. When I was 11 years old, I bought a used TRS-80 from a friend of the family using my paper route money. Just before I started high school, I bought one of the original IBM PCs—again with money I’d saved up from delivering newspapers.

Where did that interest in computers start? I’m not sure, but I can remember a couple of formative episodes. In the 1970s, I can remember my dad bringing home a (briefcase-sized) portable terminal, complete with acoustic coupler modem. I remember being blown away by the idea that you could just plop down near a phone and interact with computers all over the world. I also had a friend with a much older sister who married a guy who did IT consulting for a living. This guy had all the cool toys and lived a very nice lifestyle. That seemed like a great deal to me.

What was your first IT job?

By the time I got to college, I had enough PC experience that I got a work-study job doing tech support for the administrative departments at our school. I guess this was the first in a long line of IT support jobs.

Just before I arrived at school, the nascent Computer Science Program (not its own Department yet!) had received a grant to purchase a network of engineering workstations. The head of the program, in a moment of pure inspiration, decided to go with Sun Microsystems computers. But he didn’t have time to manage the network in addition to his teaching responsibilities. He drafted interested students to be System and Network Admins for our small network.

I was a user on that network and fascinated with the computer games of the time—Rogue, Nethack, XConq, and so on. I regularly maxed my disk quota installing these games in my home directory. The student admins got frustrated with this behavior and told me, “Here’s the root password. Install those games where everybody can use them!” The rest, as they say, is history. I spent my last three years at a very theoretical liberal arts college getting a vocational education in Unix System and Network Administration.

Towards the end of my senior year in college I knew that I did not want to head directly to grad school. I had been sending out resumes to local organizations that were using Sun computers—which I figured out by looking at the UUCP maps of the time—but was not getting much response. Then one day the phone rang in the CS lab. It was a recruiter looking to fill a Sun Admin role at AT&T Bell Labs. I interviewed and got the job, which was something of a miracle when I think back on some of the cringeworthy answers I gave during the interview!

What was your first cybersecurity job?

I was hired at AT&T Bell Labs Holmdel as a junior Sun System Administrator. The Labs at the time were engaged in the process of moving from mainframe Unix systems to distributed networks of primarily Sun systems. But my boss was also a big wheel in the internal Bell Labs computer security team, having caught an attacker who was abusing the AT&T long distance networks for many months. She saw that I had an interest in computer security and became an important mentor. Through her I got to meet Bell Labs infosec luminaries like Bill Cheswick and Steve Bellovin. Though I’ve had lots of different job titles over the years, all of my work since then has had at least some infosec component.

What advice would you have for someone who is looking to start a career in Information Technology?

Start by recognizing that there is no “ideal path”. Some of the best people in our industry got here by very roundabout paths. And despite what you might hear, they weren’t always “passionate” about computers or information security.

Get as broad an education as possible—and not just in STEM! I deliberately chose to go to a liberal arts school because I wanted to study many things besides science and engineering. And the perspectives I gained through that broad education have very much informed my technical career. And just maybe helped me avoid some of the burnout that is so prevalent in our industry.

Recognize that the technology you are training on today will not be around for your entire career. When I was going through school, CS classes were taught in the Pascal programming language! Learn fundamental concepts that you can apply to any technology—networking, routing, algorithms, data structures, cryptography, the “CIA triad” and so on. A long career in infosec is based on a broad knowledge of technology.

What might be some challenges or obstacles someone might face as they look at starting their career?

Standing out from the crowd seems to be the biggest problem for people starting out these days. There are a lot of folks who heard they can make a good living at computers and information security and there seem to be too many candidates vying for too few junior positions.

    Research and blogging seem to me the best path for getting noticed. While your peers are flogging their guts out for the latest certifications, maybe you could be spending your time doing original research and documenting your findings. The availability of free virtualization means it’s never been easier to create your own personal lab environment.

    A well-written blog shows interest and passion for the subject. It demonstrates your technical capabilities. And it also demonstrates your communication skills, which are becoming only more valuable in our industry.

    What challenges have you faced in your career?

    Simply having a career through multiple decades has been a challenge in and of itself. I’ve persevered through multiple recessions and various industry catastrophes. My key there has been diversification. I’ve been a system admin, a network admin, a security admin, a DBA, a network architect, a developer, a forensic analyst, an incident responder, an expert witness, a trainer and course author, a technical editor and author, and who knows what else. The best piece of advice I ever got was, “Learn one big new thing every year.” If you can do that, you will always find a way to be in demand.

    On a personal level, many of the challenges have been learning to get out of my own way. Being open to learning and understanding that I was not always the smartest person in the room. Understanding the perspectives of my customers and putting their needs ahead of what I thought was “right” from my narrow perspective. Realizing that process and documentation (if done properly!) actually make things better rather than just being a drag. And frankly just trying to be less of an asshole to everybody I interact with.

    What do you think are the most important non-technical skills for a student to learn?

    Communication skills are number one. Pick a subject that you know very well. Can you succinctly document your knowledge so that somebody new to the subject can understand at least the basic concepts? Now can you explain that subject 1-on-1 with another person? Can you explain it to a room full of people?

    How do you “think like a hacker?”

    From my perspective, this mindset is all about anticipating failure. When you’re a system and network admin, you begin to learn which architectures work and which ones don’t. You learn where the points of failure tend to occur. Eventually this becomes a bit of a “sixth sense” that you internalize without even thinking about it.

    The hacker mindset is the same. What if this were to fail catastrophically? What could cause that to happen? Could I cause that to happen? How could I leverage that?

    What advice do you have to avoid burnout?

    I do believe in the usual advice. Have interests outside of computers and technology. Remember that you work so that you can live, not the other way around. Never be afraid to say, “No.” Take time off—and by that I mean real time off where you are not worrying about your job and day-to-day responsibilities.

    But this advice comes from a place of extreme privilege. Many of you are out there struggling to afford your lives, working in a corrosive job environment, and fighting battles that may be hard for others to understand. I see you.

    It is the responsibility of all of us with privilege to address some of the fundamental inequities in our society. Do whatever you can to make the world better for everybody. And I find that helping others works to combat burnout as well!

    What advice do you have about imposter syndrome?

    Every day in real life and on social media you are confronted with people who seem to be so much more confident and knowledgeable than you. But remember that you are only seeing at most 10% of who that person really is. Sure, if you compare 100% of you to the best 10% of every person you meet, you’re going to end up feeling not so good about yourself. But once you get to know these people, you realize that they have their own insecurities and “blind spots” just like you.

    Get comfortable with the idea that while you can never know everything, you do know something and that something has value. And you can share that thing you know with other people. And they can share what they know with you and others. And we can all get better.

    Why did you become a teacher?

    I grew up in a very welcoming technical community and was fortunate to be mentored by a great number of people—some well-known, some not. There was an understanding that when I reached a level of expertise that I would “pay it forward” by teaching the next generation. I take that very seriously.

    I was also fortunate to come of age in a time when large IT staffs were common. You would start work as a junior member of the team and receive on the job training from the senior admins. These days it seems like very small or even one person IT shops are the norm. You can’t learn everything from Google and Stack Overflow! Teaching and writing are my way of trying to provide that mentorship that I received in the early stages of my career.

    Is there a moment in your career that shaped your approach to teaching?

    I can remember watching Bill Cheswick present at one of the first USENIX conferences I attended. Bill was commanding the room with his knowledge while decked out in an Aloha shirt, cargo shorts, and Birkenstocks. And I realized that training didn’t have to be stuffy and academic. That was a powerful moment that’s stuck with me as I train others.

    Do you feel that teaching has made you more knowledgeable?

    You never learn a subject so thoroughly as when you need to teach it to somebody else. Creating course material and teaching have increased my understanding of technology in ways I never expected. And I learn things from my students every time I teach!

    I hear people saying, “But I’m not an expert! Nobody wants to listen to me teach!” Nonsense! Expertise is a subjective marker and many people underrate their abilities. Start teaching even before you think you’re ready. Watch how your understanding grows rapidly.

    Orphan Processes in Linux

    Orphan processes can sometimes cause confusion when analyzing live Linux systems. But during a recent run of my Linux Forensics class, one of my students showed me an interesting trick that I wanted to make more generally known.

    Consider a simple hierarchy of processes:

    UID          PID    PPID  C STIME TTY          TIME CMD
    root         725       1  0 12:53 ?        00:00:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
    root        1285     725  0 12:55 ?        00:00:00 sshd: lab [priv]
    lab         1335    1285  0 12:55 ?        00:00:00 sshd: lab@pts/0
    lab         1352    1335  0 12:55 pts/0    00:00:00 -bash
    lab         1415    1352  0 12:55 pts/0    00:00:00 ping 192.168.10.137
    

    At the top is the master SSH server for this system. Its parent process ID (PPID) is one, because it was started by systemd when the machine booted. Then we have the root-owned sshd process that was started when I connected to the system, and an sshd running as my user thanks to sshd’s privilege separation feature. That unprivileged sshd starts my bash login shell, and from that shell I fired up a ping command to run in the background. You can walk all the way back up the chain: in each case the PPID of a process is the PID of its parent immediately above it.

    This makes it easy to view these processes as a hierarchy using a tool like pstree:

    systemd(1)─┬─ModemManager(704)─┬─{ModemManager}(718)
               │                   └─{ModemManager}(723)
               ├─NetworkManager(669)─┬─{NetworkManager}(694)
               │                     └─{NetworkManager}(701)
               ├─...
               ├─sshd(725)───sshd(1285)───sshd(1335)───bash(1352)───ping(1415)
               ├─...

    But what happens when I exit my bash shell and leave the ping process running in the background?

    lab         1415       1  0 12:55 ?        00:00:00 ping 192.168.10.137

    My poor little ping process has become an orphan, and its PPID is now shown as one. This creates kind of a strange look in the pstree output:

    systemd(1)─┬─ModemManager(704)─┬─{ModemManager}(718)
               │                   └─{ModemManager}(723)
               ├─NetworkManager(669)─┬─{NetworkManager}(694)
               │                     └─{NetworkManager}(701)
               ├─...
               ├─ping(1415)
               ├─...

    Analysts can confuse an orphaned process with one that was started by systemd, and this creates an opportunity for bad actors to obfuscate processes that they started interactively.

    /proc to the Rescue!

    What my student pointed out to me is that the original PPID of the ping process is still tracked under the /proc/<pid> directory for the process. For example, /proc/1415/status shows the original PPID under the NSsid (Namespace Session ID) field:

    Name:   ping
    Umask:  0022
    State:  S (sleeping)
    Tgid:   1415
    Ngid:   0
    Pid:    1415
    PPid:   1
    TracerPid:      0
    Uid:    1000    1000    1000    1000
    Gid:    1000    1000    1000    1000
    FDSize: 256
    Groups: 24 25 27 29 30 44 46 108 113 117 120 1000
    NStgid: 1415
    NSpid:  1415
    NSpgid: 1415
    NSsid:  1352
    ...

    More tersely, you can see the original PPID as the sixth field in /proc/1415/stat:

    1415 (ping) S 1 1415 1352 0 -1 4194560 ...
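
    If you want to pull that field out from the command line, simple field splitting works (with the caveat that the second field is the command name in parentheses, so the field count shifts if the command name itself contains spaces):

    $ awk '{print $6}' /proc/1415/stat
    1352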

    If you compare this with the output of the master sshd process that was actually started by systemd, you will notice a difference in this field. Here’s /proc/725/status:

    Name:   sshd
    Umask:  0022
    State:  S (sleeping)
    Tgid:   725
    Ngid:   0
    Pid:    725
    PPid:   1
    TracerPid:      0
    Uid:    0       0       0       0
    Gid:    0       0       0       0
    FDSize: 128
    Groups:
    NStgid: 725
    NSpid:  725
    NSpgid: 725
    NSsid:  725
    ...

    And here’s /proc/725/stat:

    725 (sshd) S 1 725 725 0 -1 4194560 ...

    In these cases, the process NSsid is the PID of the process started by systemd.

    That’s Session ID

    There’s one more subtlety at play here. I’m going to start two new ping processes: one in my login shell, and then one after I use sudo to become root:

    lab@LAB:~$ ping 192.168.10.137 >/dev/null &
    [1] 1492
    lab@LAB:~$ sudo -s
    root@LAB:/home/lab# ping 192.168.10.137 >/dev/null &
    [1] 1495
    root@LAB:/home/lab# pstree -d
    ...
    ───bash(1465)─┬─ping(1492)
                  └─sudo(1493)───bash(1494)─┬─ping(1495)
                                            └─pstree(1497)
    ...
    root@LAB:/home/lab# exit
    exit
    lab@LAB:~$ logout

    I’ve logged out of both shells to orphan both ping processes.

    Now I’ll log back in and check the NSsid of the two orphaned processes:

    lab@LAB:~$ grep NSsid: /proc/1492/status /proc/1495/status
    /proc/1492/status:NSsid:        1465
    /proc/1495/status:NSsid:        1465

    NSsid is short for Namespace Session ID (the session ID, reported relative to the process’s PID namespaces), and a session spans the entire login session initiated when the user logs in. So even though I used sudo to become the root user, the NSsid is still the PID of my unprivileged login shell. Or, for example, if an attacker manages to escalate privilege in the middle of their session, you can still use the NSsid to tie together processes they may have started in their root shell with processes from the original unprivileged session.
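
    On a live system you don’t even have to dig through /proc for this: the procps version of ps can print the session ID directly, and sorting on it groups everything from a single login session together:

    $ ps -eo sid,pid,ppid,user,args --sort=sid

    Filtering that output on SID 1465 (the PID of my original login shell) would pull out both orphaned ping processes, even though one of them is now running as root.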

    Web Shells and cron Jobs

    That got me curious about some other potential scenarios: web shells and cron jobs. I installed the nginx web server along with PHP-FPM. I created a simple web shell in PHP that just invoked any command I passed to it with system(). This means that PHP-FPM will invoke /bin/sh first, which will then execute the command I pass in. The process hierarchy ended up looking like this:

               |-php-fpm(8918)-+-php-fpm(8919)---sh(9074)---ping(9075)
               |               `-php-fpm(8920)

    The NSsid of the ping process was 8918— the PID of the master PHP-FPM process that was started by systemd.

    Next I set up a cron job that ran ping every minute in the background. I quickly stacked up a number of ping processes:

               |-ping(9199)
               |-ping(9209)
               |-ping(9216)
               |-ping(9225)

    Notice that the ping processes here have been orphaned and show no relation to the original cron process that launched them. When I checked the NSsid values of these processes, here is what I found:

    root@LAB:~# grep NSsid: /proc/9199/status /proc/9209/status /proc/9216/status /proc/9225/status
    /proc/9199/status:NSsid:        9198
    /proc/9209/status:NSsid:        9208
    /proc/9216/status:NSsid:        9214
    /proc/9225/status:NSsid:        9224
    root@LAB:~# ps -ef | grep cron
    root         665       1  0 12:53 ?        00:00:00 /usr/sbin/cron -f
    root@LAB:~# ls -ld /proc/9198
    ls: cannot access '/proc/9198': No such file or directory

    Each process had a distinct NSsid and none of them matched the PID of the parent cron process. Like system(), cron runs a shell to execute each cron job. But unlike the PHP-FPM example, each shell spawned by cron starts a new session with a new NSsid value. The ping processes were then orphaned when those short-lived shells exited.

    Bottom line is that processes that were started by systemd have their own PID as the NSsid. Processes started interactively, or launched from other services such as PHP-FPM or cron, have some other PID in the NSsid field. Exactly what PID that is can vary depending upon how the process was launched. The NSsid field persists even when the process has been orphaned.
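
    That suggests a quick triage check: walk /proc looking for orphaned processes (PPid of one) whose NSsid is not their own PID. Those are the processes that did not come straight from systemd and deserve a closer look. A rough sketch for a live system (expect a few benign hits, such as leftovers from login sessions that have since ended):

    #!/bin/bash
    # Flag orphans (PPid 1) whose session leader is some other (possibly dead) process
    for status in /proc/[0-9]*/status; do
        pid=$(awk '$1 == "Pid:"   {print $2}' "$status" 2>/dev/null)
        ppid=$(awk '$1 == "PPid:" {print $2}' "$status" 2>/dev/null)
        sid=$(awk '$1 == "NSsid:" {print $2}' "$status" 2>/dev/null)
        if [[ "$ppid" == "1" && -n "$sid" && "$sid" != "$pid" ]]; then
            echo "PID $pid (NSsid $sid): $(tr '\0' ' ' </proc/$pid/cmdline 2>/dev/null)"
        fi
    done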

    Unfortunately, once the process has been orphaned, we can’t recreate the entire original process hierarchy without additional data that’s not available by default. To rebuild the process hierarchies you would need to be tracking exec() system calls with auditd or eBPF, or using a third-party tool.
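
    If you want that data to be there the next time you need it, one low-effort option is an audit rule that records every exec on the system. Something along these lines is a reasonable starting point (add a matching arch=b32 rule if you run 32-bit binaries, and expect plenty of volume on a busy box):

    # auditctl -a always,exit -F arch=b64 -S execve,execveat -k proc_exec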

    Skunkworks in the Clouds

    Skunks... clouds... get it?

    I was recently asked to make a guest appearance on a podcast related to information security in “the cloud”.  One of the participants brought up an interesting anecdote from one of his clients.  Apparently the IT group at this company had been approached by a member of their marketing team who was looking for some compute resources to tackle a big data crunching exercise.  The IT group responded that they were already overloaded and it would be months before they could get around to providing the necessary infrastructure.  Rebuffed but undeterred, the marketing person used their credit card to purchase sufficient resources from Amazon’s EC2 to process the data set and got the work done literally overnight for a capital cost of approximately $1800.

    There ensued the predictable horrified gasping from us InfoSec types on the podcast.  Nothing is more terrifying than skunkworks IT, especially on infrastructure not under our direct control.  “Didn’t they realize how insecure it was to do that?” “What will happen when all of our users realize how easily and conveniently they can do this?” “How can an organization control this type of risky behavior?” We went to bed immersed in our own paranoid but comfortable world-view.

    Since then, however, I’ve had the chance to talk with other people about this situation.  In particular, my friend John Sechrest delivered an intellectual “boot to the head” that’s caused me to consider the situation in a new light.  Apparently getting the data processed in a timely fashion was so critical to the marketing department that they figured out their own self-service plan for obtaining the IT resources they needed. If the project was that critical, John asked, was it reasonable from a business perspective for the IT group to effectively refuse to help their marketing department crunch this data?

    Maybe the IT group really was overloaded– most of them are these days.  However, the business of the company still needs to move forward, and the clever problem-solving monkeys in various parts of the organization will figure out ways to get their jobs done even without IT support. “Didn’t they realize how insecure it was to do that?”  No, and they didn’t care.  They needed to accomplish a goal, and they did.

    “What will happen when all of our users realize how easily and conveniently they can do this?” My guess is they’re going to start doing it a lot more.  Maybe that’s a good thing.  If the IT group is really overloaded, then perhaps it should think about actually empowering its users to do these kinds of “one off” or prototype projects on their own without draining the resources of the core IT group.  Remember that if you let a thousand IT projects bloom, 999 of them are going to wither and die shortly thereafter.  Perhaps IT doesn’t need to waste time managing the death of the 999.

    “How can an organization control this type of risky behavior?” You probably can’t.  So perhaps your IT group should provide a secure offering that’s so compelling that your users will want to use your version rather than the commodity offerings that are so readily available.  This solution will have to be tailored to each company, but I think it starts with things like:

    • Pre-configured images with known baseline configurations and relevant tools so that groups can get up and running quickly without having to build and upload their own images.
    • Easy toolkits for migrating data into and out of these images in a secure fashion, with some sort of DLP solution baked in.
    • Secure back-end storage to protect the data at rest in these images with no extra work on the part of the users.
    • Integration with the organization’s existing identity management and/or AAA framework so that users don’t have to re-implement their own solutions.
    • Integration with the organization’s auditing and logging infrastructures so you know what’s going on.

    Putting together the kind of framework described above is a major IT project, and will require input and participation from your user community.  But once accomplished, it could provide massive leverage to overtaxed IT organizations.  Rather than IT having to engineer everything themselves, they provide secure self-service building blocks to their customers and let them have at it.

    Providing architecture support and guidance in the early stages of each project is probably prudent.  After all, the one hardy little flower that blooms and refuses to die may become a critical resource to the organization that eventually needs to be moved back “in house”.  The fact that the service is built from blocks that are already well-integrated with the organization’s centralized IT infrastructure will help, but a reasonable architectural design from the start will also be a huge help when it comes time to migrate and continue scaling the service.

    Am I advocating skunkworks IT?  No, I like to think I’m advocating self-service IT on a grand scale.  You’ll see what skunkworks IT looks like if you ignore this issue and just let your users develop their own solutions because you’re too busy to help them.

    Never Sell Security As Security

    Some months ago, a fellow Information Security professional posted to one of the mailing lists I monitor, looking for security arguments to refute the latest skunkworks project from her sales department.  Essentially, one of the sales folks had developed a thick client application that connected to an internal customer database.  The plan was to equip all of the sales agents in the field with this application and allow them to connect directly back through the corporate firewall to the production copy of the database over an unencrypted link.  This seemed like a terrible idea, and the poster was looking to marshal arguments against deploying this software.

    The predictable discussion ensued, with everybody on the list enumerating the many reasons why this was a bad idea from an InfoSec perspective and in some cases suggesting work-arounds to spackle over deficiencies in the design of the system.  My advice was simpler– refute the design on Engineering principles rather than InfoSec grounds.  Specifically:

    • The system had no provision for allowing the users to work off-line or when the corporate database was unavailable.
    • While the system worked fine in the corporate LAN environment, bandwidth and latency issues over the Internet would probably render the application unusable.

    Sure enough, when confronted with these reasonable engineering arguments, the project was scrapped as unworkable.  The Information Security group didn’t need to waste any of their precious political capital shooting down this obviously bad idea.

    This episode ties into a motto I’ve developed during my career: “Never sell security as security.”  In general, Information Security only gets a limited number of trump cards they can play to control the architecture and deployment of all the IT-related projects in the pipeline.  So anything they can do to create IT harmony and information security without exhausting their hand is a benefit.

    It’s also useful to consider my motto when trying to get funding for Information Security related projects.  It’s been my experience that many companies will only invest in Information Security a limited number of times: “We spent $35K on a new firewall to keep the nasty hackers at bay and that’s all you get.”  To achieve the comprehensive security architecture you need to keep your organization safe, you need to get creative about aligning security procurement with other business initiatives.

    For example, file integrity assessment tools like Tripwire have an obvious forensic benefit when a security incident occurs, but the up-front cost of acquiring, deploying, and using these tools just for the occasional forensic benefit often makes them a non-starter for organizations.  However, if you change the game and point out that the primary ongoing benefit of these tools is as a control on your own change management processes, then they become something that the organization is willing to pay for.  You’ll notice that the nice folks at Tripwire realized this long ago and sell their software as “Configuration Control”, not “Security”.

    Sometimes you can get organizational support from even further afield.  I once sold an organization on using sudo with the blessings of Human Resources because it streamlined their employee termination processes: nobody knew the root passwords, so the passwords didn’t need to get changed every time somebody from IT left the company.  When we ran the numbers, this turned out to be a significant cost-savings for the company.

    So be creative and don’t go into every project with your Information Security blinders on.  There are lots of projects in the pipeline that may be bad ideas from an Information Security perspective, but it’s likely that they have other problems as well.  You can use those problems as leverage to implement architectures that are more efficient and rational from an Engineering as well as from an Information Security perspective.  Similarly there are critical business processes that the Information Security group can leverage to implement necessary security controls without necessarily spending Information Security’s capital (or political) budget.

    We’re In Your Kernel, Escalating Our Privs

    Ed Skoudis and Mike Poor were kind enough to invite me to sit in on their recent SANS webcast round-table about emerging security threats.  During the webcast I was discussing some emerging attack trends against the Linux kernel, which I thought I would also jot down here for those of you who don’t have time to sit down and listen to the webcast recording.

    Over the last several months, I’ve been observing a noticeable uptick in the number of denial-of-service (DoS) conditions reported in the Linux kernel.  What that says to me is that there are groups out there who are scrutinizing the Linux kernel source code looking for vulnerabilities.  Frankly, I doubt they’re after DoS attacks– it’s much more interesting to find an exploit that gives you control of computing resources rather than one that lets you take them away from other people.

    Usually when people go looking for vulnerabilities in an OS kernel they’re looking for privilege escalation attacks.  The kernel is often the easiest way to get elevated privileges on the system.  Indeed, in the past few weeks there have been a couple [1] [2] of fixes for local privilege escalation vulnerabilities checked into the Linux kernel code.  So not only are these types of vulnerabilities being sought after, they’re being found (and probably used).

    Now “local privilege escalation” means that the attacker has already found their way into the system as an unprivileged user.  Which raises the question: how are the attackers achieving their first goal of unprivileged access?  Well certainly there are enough insecure web apps running on Linux systems for attackers to have a field day.  But as I was pondering possible attack vectors, I had an uglier thought.

    A lot of the public Cloud Computing providers make virtualized Linux images available to their customers.  The Cloud providers have to allow essentially unlimited open access to their services to anybody who wants it– this is, after all, their entire business model.  So in this scenario, the attacker doesn’t need an exploit to get unprivileged access to a Unix system: they get it as part of the Terms of Service.

    What worries me is attackers that pair their local privilege escalation exploits with some sort of “virtualization escape” exploit, allowing them hypervisor level access to the Cloud provider’s infrastructure.  That’s a nightmare scenario, because now the attacker potentially has access to other customers’ jobs running in that computing infrastructure in a way that will likely be largely undetectable by those customers.

    Now please don’t mistake me.  As far as we know, this scenario has not occurred.  Furthermore, I’m willing to believe that the Cloud providers supply generally higher levels of security than many of their customers could do on their own (the Cloud providers having the resources to get the “pick of the litter” when it comes to security expertise).  At the same time, the scenario I paint above has got to be an attractive one for attackers, and it’s possible we’re seeing the precursor traces of an effort to mount such an attack in the future.

    So to all of you playing around in the Clouds I say, “Watch the skies!”