Linux Forensics - Collecting a Triage Image Using The UAC Tool
- thatdfirdude
- Apr 27, 2024
- 7 min read
Updated on October 26, 2025 - Section on moving UAC to/from vCenter/ESX
Let's discuss a topic that doesn't get enough coverage and often feels like the "unspoken" or "daunting" territory of Digital Forensics and Incident Response (DFIR). I'm going to say the words… LINUX FORENSICS. Oh yeah, this blog post is going to discuss how to collect a triage image of a *nix box. The tool we'll cover is one you need to add to your toolkit and have at the ready when it comes to triaging *nix systems. Before we talk about the tool, let's talk about why a "triage image" and what this is exactly.
Modern-day DFIR is all about getting to the important stuff faster, right? (Of course, this depends on your role, your capabilities, and the questions you're trying to answer.) To respond effectively to an active incident, we need to know where the data is, how to collect it, how to parse it quickly, and how to identify actionable data so we can combat the threat. As you can imagine, if you're responding to an active incident, you're likely not going to want to collect a full-disk image. A full image not only takes a long time to collect; it's also very large to store, time-consuming to upload to the proper team, time-consuming to download, and very time-consuming to parse into your tools before you can even begin searching through the noise. Sure, you'll have pivot points, but when it comes to intrusion analysis, do we really need to collect pictures, videos, and the content of every single file on the filesystem? Why not focus on the things that matter to the intrusion? This includes various logging sources, filesystem metadata, and other artifacts: everything needed to identify what the actor did, when they did it, and how they did it, and to collect that intelligence to eradicate the threat and move toward recovery. This is where "triage analysis" comes into play. For Windows systems, you may have heard of Eric Zimmerman's (Kroll) fantastic tool "KAPE", the Kroll Artifact Parser and Extractor. But what do we have for Linux? The majority of artifacts on Linux are just that… files. So, as you can imagine, the data collected will look a bit different than it would on a Windows system.
In this blog post, we'll discuss a fantastic tool called the "Unix-like Artifacts Collector", or simply UAC. It can be downloaded from https://github.com/tclahr/uac. I won't explain the tool in all its glory, but in a nutshell, it's a shell script (with a set of configuration files) that collects the important data from a *nix-based system for intrusion analysis, or forensics in general. It collects items such as browser history, the /var/log contents containing many Linux logs, tmp files, a bodyfile (an export of filesystem metadata that can be turned into a timeline), live system information such as netstat output and running processes, configuration files found in /etc, and much, much more. You can even customize your own collection profile that contains the specific files you want to collect! I mean, what else can you ask for? Essentially, all you need is a *nix-based system with shell access. So, if you're dealing with a compromised appliance such as a *nix-based firewall, an ESX server, or a storage device such as a NAS, this tool will likely be perfect for you!

Just remember, this is a “tool”. It will not “find evil” for you, but it will collect the data for your review and provide it in a nicely wrapped package for you!
Some of the main features described on the GitHub for the tool are:
Run everywhere with no dependencies (no installation required).
Customizable and extensible collections and artifacts.
Respect the order of volatility during artifact collection.
Collect information from processes running without a binary on disk.
Hash running processes and executable files.
Extract information from files and directories to create a bodyfile (including enhanced file attributes for ext4).
Collect user and system configuration files and logs.
Collect artifacts from applications.
Acquire volatile memory from Linux systems using different methods and tools.
Okay, now that I’ve piqued your interest, let’s discuss a scenario here.
You are notified of suspicious activity on your ESX host. This host triggered an alert on your Network Security Monitoring (NSM) appliance, such as an IDS (Intrusion Detection System), for a signature matching a Linux-based crypto-miner. Essentially, someone is using your server's resources to make money. The nerve! Let's use a tool to collect a triage image of this server in order to get a better understanding of what we're dealing with, collect intel, and use that intel to scope the incident and ensure it's contained!
Let’s start by downloading the tool. As mentioned, this can be found here: https://github.com/tclahr/uac. Be sure to download the latest release.
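If your analysis box has outbound internet access, you can pull the release tarball straight from GitHub on the command line. The version and filename below are examples only; check the releases page for the current version:

# Example only - grab the latest release from https://github.com/tclahr/uac/releases
curl -L -O https://github.com/tclahr/uac/releases/download/v2.8.0/uac-2.8.0.tar.gz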

Excellent! Now let’s go ahead and move this file to our system of interest. This can be done remotely using SSH, SCP, TFTP, external storage, file-share, etc. Whatever works for you in this scenario!
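For example, a quick SCP push of the tarball over SSH might look like the following; the user, host, and path are placeholders for your environment:

# Copy the UAC tarball to the target system (placeholder user/host/path)
scp uac-2.8.0.tar.gz analyst@target-host:/tmp/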

Perfect, let's decompress this archive now. This can be done with the command "tar -xf uac-2.8.0.tar.gz" (match the filename to the release you downloaded). You'll now see the decompressed folder named 'uac-2.8.0'.

Once fully decompressed, we can see all the files associated with the tool, including our profile configuration and more. Again, I won't go into detail about what each file does, but in summary:
artifacts - Contains the listing of what to collect. This can be customized
bin - You can place additional binaries here that you may want to use for your collections
tools - Various *nix scripts the collector may use as part of its normalization
profiles - Contains the collection profiles. These essentially define which "artifacts" will be collected. The tool ships with default profiles ("full" and "ir_triage"), but you can make your own; see the sketch after this list
uac - The shell script itself
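As a quick aside, here's a rough sketch of what a custom profile could look like. This is hypothetical and modeled on the structure of the bundled profiles; verify the exact artifact paths against the files under the artifacts/ directory of your UAC release before using it:

# my_triage.yaml - hypothetical custom profile (artifact paths must be
# verified against the artifacts/ directory of your UAC release)
name: my_triage
description: Minimal custom collection for quick intrusion triage.
artifacts:
  - live_response/process/ps.yaml
  - live_response/network/netstat.yaml
  - bodyfile/bodyfile.yaml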

In this scenario, for the sake of time, we'll go ahead and use the default profiles, which are "full" and "ir_triage". Executing this is simple! Just navigate to the directory where the decompressed tool lives and run a command such as: sudo ./uac -p full <output_directory>
The command below executes the 'uac' script using the profile 'ir_triage' and places the output in my current working directory.
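# Run UAC with the ir_triage profile, writing output to the current directory
sudo ./uac -p ir_triage .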

As it runs, the tool will display the artifacts collected by this profile.

Let the tool do its thing! It will begin collecting a large number of artifacts. Note that the above example used the "ir_triage" profile, which collects less data than "full". Depending on how much data there is to collect, the size of the system, processing power, etc., the time to collect all requested data will vary.

Boom! We have our output nicely compressed for us!
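Back on your analysis box, the result unpacks like any other tarball. The archive name varies by hostname and collection timestamp, so the filename below is a placeholder:

# Unpack the UAC output archive (filename is a placeholder)
tar -xzf uac-<hostname>-linux-<timestamp>.tar.gz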

Let’s take a look at some of the data.
/var/log directory
/etc files including ‘shadow’ and ‘passwd’.
Items in the ‘tmp’ directory
And of course, live data!
Here's some of the "live_response" data collected:

/var/log content

As mentioned, it even collected the bodyfile of the filesystem! So you can use a tool such as "mactime" to sort this into a timeline and use it to review the files present on the system!
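If you have The Sleuth Kit installed, mactime can turn that bodyfile into a sorted, delimited timeline. The bodyfile path below is a placeholder; locate the actual file inside your UAC output first:

# Convert the UAC bodyfile into a CSV timeline (path is a placeholder)
mactime -b bodyfile/bodyfile.txt -d -y > filesystem_timeline.csv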

Moving UAC To and From ESX/vSphere
As a note, there are numerous ways to move UAC to and from your ESX server, so I'll only list a few of the most common ones that I've seen in my experience. What works for you may vary based on your environment or the particular incident you're working within.
Common Techniques
SCP
SFTP
Datastore Browser
SCP/SFTP
In my experience, using SCP to move files to and from your ESX server is extremely straightforward. This can be accomplished in two ways: via a GUI-based tool such as WinSCP, or via the CLI using the scp command. Obviously, you'll need to make sure SSH/SCP is enabled and allowed beforehand.
To use the CLI, you'll just need to run a similar command such as:
scp /path/to/local/file root@<esxi-ip>:/path/to/remote/destination
This will allow you to move a file FROM your host system to your ESX server.
To move a file from the ESX server to your host for collection, you can reverse this command and use something such as:
scp root@<esxi-ip>:/path/to/remote/file /path/to/local/output
To utilize a CLI tool such as SFTP, you can use a similar command to connect to the ESX host, just like you did using SSH. For example: sftp root@<esxi-ip>. Once connected, you can use the get command to retrieve a file or the put command to place a file on the server.
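Assuming SFTP is available on the host, a short session might look like the following; the IP and paths are placeholders:

# Connect to the ESX host over SFTP (placeholder IP and paths)
sftp root@192.0.2.10
sftp> put uac-2.8.0.tar.gz /tmp/
sftp> get /tmp/uac-output.tar.gz .
sftp> exit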
To move or collect a file using a GUI-based tool over SCP, such as WinSCP, you can literally just drag and drop the files.


VMware DataStore Browser
A datastore within VMware is basically a logical storage container that holds files for the ESX server and its VMs. It can be browsed via the ESX/vSphere web UI. Once there, you can click into the ESX storage, open the Datastore browser, and drag and drop a file, or download one from the server to your host. Once a file is placed on the server, you can then execute or interact with it via SSH. Let's break this down:
First, navigate to the web UI, which can be accessed by opening a browser and entering the IP of the ESX server or its Fully Qualified Domain Name (FQDN). Once here, you can sign in and navigate the web UI. You'll see an option for "Storage" > Datastores > Datastore browser. This will open the file browser for ESX.

Once inside of the DataStore, you can create a new folder and upload/download files.


To view the content of the datastore once a file is uploaded, you can SSH to your ESX server and navigate to /vmfs/volumes/, where each datastore appears by name (as a symlink) alongside its volume UUID. Note that the path is lowercase and case-sensitive.
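Putting it together, running UAC from an uploaded datastore folder over SSH might look like the following; the datastore and folder names are placeholders, and since the ESXi shell typically runs as root, sudo isn't needed:

# On the ESXi host over SSH (placeholder datastore/folder names)
cd /vmfs/volumes/datastore1/uac-2.8.0
mkdir /vmfs/volumes/datastore1/uac-output
./uac -p ir_triage /vmfs/volumes/datastore1/uac-output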


Again, the interesting part of this tool is the fact that it's simple, yet amazing! It uses various built-in *nix tools to run a multitude of commands and places the results in an easy-to-understand archive. It even retains the directory structure! With this data, we can now analyze it. You can place it in your favorite forensics tool, and since the directory structure is retained, it gets treated as if you uploaded an image. You can then start parsing through the logs or doing keyword searches.
If you do DFIR, whether internally or as an MSSP, this is a tool you need to have at the ready. Consider making your own profiles that fit your organization and needs.



