Introduction
Artemis is a powerful command line digital forensic and incident response
(DFIR) tool that collects forensic data from Windows, macOS, and Linux
endpoints. Its primary focus is speed, ease of use, and low resource usage.
Notable features so far:
- Setup collections using basic TOML files
- Parsing support for a large number of forensic artifacts (25+)
- Output to JSON or JSONL file(s)
- Can output results to the local system or upload to cloud services
- Embedded JavaScript runtime via Deno
- Can be used as a library via artemis-core
- MIT license
The goal of this book is to provide a comprehensive guide on how to use artemis and artemis-core.
Has this been tested on real incidents?
NO
artemis is a new forensic tool written from scratch and it has not been tested in any production environment. It does, however, have an extensive test suite and has been carefully developed to make sure the data it produces is accurate.
If you are looking for a free and open-source forensic tool to lead an investigation, two (2) great options are:
- The cross-platform forensic tool Velociraptor
- The Windows-only but still excellent Zimmerman tools
During the development of artemis both of these tools were used to provide verification that the output of artemis is correct.
If you are looking for a free and open-source forensic tool to add to your forensic toolkit, to casually review forensic data, or to compare the results of other forensic tools, then artemis is a great option.
Over time, as artemis matures, bugs are found and fixed, and feedback is given, this statement will be updated once artemis is ready to lead real-world investigations.
artemis vs artemis-core
artemis is an executable that can be run on Windows, macOS, or Linux systems.
artemis-core is a library that can be imported into an application to parse forensic data. artemis imports the artemis-core library to perform all of its forensic parsing.
Contributing
You can find the source code on GitHub. If you find a bug, feel free to open an issue. If you would like to contribute, please read the CONTRIBUTING guide prior to starting.
License
artemis, artemis-api, artemis-scripts, and this book are released under the MIT License
Installation
Currently only Windows, macOS, and Linux binaries from GitHub Releases are provided. For now these binaries are unsigned. Any binaries from third-party services (crates.io, Homebrew, Chocolatey, etc.) are unofficial. Support for additional distribution services may be considered in the future.
Supported Systems
Currently artemis has been tested on the following types of systems:
- Windows 8.1 and higher. Arch: 64-bit
- macOS Catalina and higher. Arch: 64-bit and ARM
- Ubuntu, Fedora, Arch Linux. Arch: 64-bit and ARM
If you would like support for another OS or architecture please open an issue.
GitHub Releases
Once you have downloaded the release for your platform from GitHub, extract the binary from the archive and you should be able to start collecting forensic data!
Build from Source
You may also build artemis from source.
In order to build artemis you will need to install the Rust programming language.
Instructions to install Rust can be found on the
Rust Homepage.
Once Rust is installed you can download the source code for artemis using git:
git clone https://github.com/puffycid/artemis
Navigate to your downloaded repo and run:
cargo build
By default cargo builds a debug version of the binary. If you want to build the release version (recommended) of the binary run:
cargo build --release
The release version will be much faster and smaller than the debug version. The compiled binary will be located at:
- <path to artemis repo>\target\debug\artemis for the debug version
- <path to artemis repo>\target\release\artemis for the release version
CLI Options
artemis is designed to have a very simple CLI menu. Almost all of the code is in the artemis-core library. In fact, the only things the artemis binary does are:
- Provide the TOML collection file/data to the artemis-core library
- Provide CLI args
Running Artemis
Once you have installed artemis you can access its help menu with the command below:
artemis -h
Usage: artemis [OPTIONS]
Options:
-t, --toml <TOML> Full path to TOML collector
-d, --decode <DECODE> Base64 encoded TOML file
-j, --javascript <JAVASCRIPT> Full path to JavaScript file
-h, --help Print help
-V, --version Print version
As mentioned, the artemis binary is really just a small wrapper that provides a TOML collection definition to artemis-core. There are two (2) ways to provide TOML collections:
- Provide the full path to the TOML file on disk
- Base64 encode a TOML file and provide that as an argument
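For the second option, the standard base64 utility can produce the encoded string. A sketch (processes.toml is just an example file name; the -w0 flag is GNU-specific, macOS uses -i instead):

```shell
# Encode a TOML collection so it can be passed to artemis via -d
base64 -w0 processes.toml        # GNU/Linux: -w0 disables line wrapping
# base64 -i processes.toml       # macOS equivalent
```

The printed string can then be supplied directly as the argument to -d.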
The artemis source code provides several pre-made TOML collection files that can be used as examples.
For example, on macOS we downloaded the processes.toml file from the artemis repo to the same directory as the macOS artemis binary and ran it using sudo:
sudo ./artemis -t processes.toml
[artemis] Starting artemis collection!
[artemis] Finished artemis collection!
On Windows we downloaded the processes.toml file from the artemis repo to the same directory as the Windows artemis binary and ran it using Administrator privileges:
artemis.exe -t processes.toml
[artemis] Starting artemis collection!
[artemis] Finished artemis collection!
Both processes.toml files tell artemis to output the results to a directory called tmp/process_collection in the current directory and to output using the jsonl format:
./tmp
└── process_collection
└── d7f89e7b-fcd8-42e8-8769-6fe7eaf58bee.jsonl
To run the same collection except as a base64 encoded string on macOS we can do the following:
sudo ./artemis -d c3lzdGVtID0gIm1hY29zIgoKW291dHB1dF0KbmFtZSA9ICJwcm9jZXNzX2NvbGxlY3Rpb24iCmRpcmVjdG9yeSA9ICIuL3RtcCIKZm9ybWF0ID0gImpzb25sIgpjb21wcmVzcyA9IGZhbHNlCmVuZHBvaW50X2lkID0gImFiZGMiCmNvbGxlY3Rpb25faWQgPSAxCm91dHB1dCA9ICJsb2NhbCIKCltbYXJ0aWZhY3RzXV0KYXJ0aWZhY3RfbmFtZSA9ICJwcm9jZXNzZXMiICMgTmFtZSBvZiBhcnRpZmFjdApbYXJ0aWZhY3RzLnByb2Nlc3Nlc10KbWV0YWRhdGEgPSB0cnVlICMgR2V0IGV4ZWN1dGFibGUgbWV0YWRhdGEKbWQ1ID0gdHJ1ZSAjIE1ENSBhbGwgZmlsZXMKc2hhMSA9IGZhbHNlICMgU0hBMSBhbGwgZmlsZXMKc2hhMjU2ID0gZmFsc2UgIyBTSEEyNTYgYWxsIGZpbGVz
[artemis] Starting artemis collection!
[artemis] Finished artemis collection!
On Windows it would be (using Administrator privileges again):
artemis.exe -d c3lzdGVtID0gIndpbmRvd3MiCgpbb3V0cHV0XQpuYW1lID0gInByb2Nlc3Nlc19jb2xsZWN0aW9uIgpkaXJlY3RvcnkgPSAiLi90bXAiCmZvcm1hdCA9ICJqc29uIgpjb21wcmVzcyA9IGZhbHNlCmVuZHBvaW50X2lkID0gImFiZGMiCmNvbGxlY3Rpb25faWQgPSAxCm91dHB1dCA9ICJsb2NhbCIKCltbYXJ0aWZhY3RzXV0KYXJ0aWZhY3RfbmFtZSA9ICJwcm9jZXNzZXMiICMgTmFtZSBvZiBhcnRpZmFjdApbYXJ0aWZhY3RzLnByb2Nlc3Nlc10KbWV0YWRhdGEgPSB0cnVlICMgR2V0IGV4ZWN1dGFibGUgbWV0YWRhdGEKbWQ1ID0gdHJ1ZSAjIE1ENSBhbGwgZmlsZXMKc2hhMSA9IGZhbHNlICMgU0hBMSBhbGwgZmlsZXMKc2hhMjU2ID0gZmFsc2UgIyBTSEEyNTYgYWxsIGZpbGVz
[artemis] Starting artemis collection!
[artemis] Finished artemis collection!
JavaScript
You can also execute JavaScript code using artemis.
// https://raw.githubusercontent.com/puffycid/artemis-api/master/src/windows/processes.ts
function getWinProcesses(md5, sha1, sha256, pe_info) {
  const hashes = {
    md5,
    sha1,
    sha256,
  };
  const data = Deno.core.ops.get_processes(
    JSON.stringify(hashes),
    pe_info,
  );
  const results = JSON.parse(data);
  return results;
}

// main.ts
function main() {
  const md5 = false;
  const sha1 = false;
  const sha256 = false;
  const pe_info = false;

  const proc_list = getWinProcesses(md5, sha1, sha256, pe_info);
  console.log(proc_list[0].full_path);
  return proc_list;
}

main();
Executing the above code:
sudo ./artemis -j ../../artemis-core/tests/test_data/deno_scripts/vanilla.js
[artemis] Starting artemis collection!
[runtime]: "/usr/libexec/nesessionmanager"
[artemis] Finished artemis collection!
See section on Scripting to learn more!
Why does artemis need elevated privileges?
The goal of artemis is to parse endpoint forensic artifacts. Many of these artifacts can only be accessed with elevated privileges. If you try running artemis as a standard user, you will encounter permission errors depending on what you want to collect.
The artemis-core library does not and will never directly* modify anything on disk. It only writes results to a file if specified in the TOML collection.
* Modifying data
The main goal of most endpoint based live forensic tools is to collect data and not change anything on the endpoint. By not directly modifying files on disk we can accomplish most of this goal.
However, simply running a program on a computer can cause indirect changes to the OS that are outside of our control. Some of these indirect changes can include:
- Allocating and deallocating memory
- Logs generated by the OS when an application is executed
- Analytics generated by the OS when an application is executed
- Access timestamps are changed when opening a file for reading
Despite these indirect changes, we should still be confident that the endpoint data collected by the artemis-core library was not directly modified by artemis-core, and a different program run afterwards should get the same results as artemis-core (disregarding any changes made by the OS).
Collection Overview
In order to collect forensic data artemis needs a TOML collection that defines what data should be collected. This TOML collection can either be a file or a base64 encoded TOML file.
The core parts of a TOML collection are:
- The target OS (Windows, macOS, or Linux)
- The output configuration such as output and format type
- A list of artifacts to collect
Format
An example TOML collection is provided below:
system = "windows"
[output]
name = "amcache_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
url = ""
api_key = ""
filter_name = ""
filter_script = ""
logging = "warn"
[[artifacts]]
artifact_name = "amcache"
filter = true
[artifacts.amcache]
# Optional
# alt_drive = 'C'
- system: Defines what OS this collection targets. This example targets windows systems. This collection will only run with the Windows version of artemis
- [output]: Defines the output configuration
  - name: The output name. This can be any string value
  - directory: The directory where the output should be written. This example outputs to a directory called tmp in the current working directory
  - format: The output format. Can be either json or jsonl
  - compress: Whether to compress the output with gzip compression. Once the collection is complete the output directory will be compressed with zip compression
  - endpoint_id: An ID assigned to the endpoint. This can be any string value
  - collection_id: A number assigned to the collection. This can be any positive number
  - output: The output type. Supports: local, gcp, aws, or azure
  - url: The URL associated with either gcp, aws, or azure. This is required only if using remote upload output
  - api_key: The API key associated with either gcp, aws, or azure. This is required only if using remote upload output
  - filter_name: The name of the provided filter_script. This is optional, but if you are using a filter_script you should provide a name. Otherwise the default name UnknownFilterName is used
  - filter_script: An advanced optional output option that will pass the results of each [[artifacts]] entry into a script. See the scripting section for a detailed overview of this option
  - logging: Set the logging level for artemis. This is optional; by default artemis will log errors and warnings. Valid options are: warn, error, debug, or info
- [[artifacts]]: A list of artifacts to collect
  - artifact_name: Name of artifact
  - filter: Whether to filter the artifact data through the filter_script. This is optional; by default nothing is filtered
  - [artifacts.amcache]: Artifact configuration parameters
    - alt_drive: Use an alternative drive when collecting data. This parameter is optional
The example above collects one (1) artifact (Amcache) on a Windows system and outputs the results to the local system at the path ./tmp/amcache_collection
If we wanted to collect more than one (1) artifact we could use a collection like the one below:
system = "windows"
[output]
name = "execution_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "amcache"
[artifacts.amcache]
[[artifacts]]
artifact_name = "shortcuts"
[artifacts.shortcuts]
path = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup"
The TOML collection above collects both amcache and shortcuts data on a Windows system and outputs the results to the local system at the path ./tmp/execution_collection.
Notable changes:
- name: our collection is now named execution_collection
[[artifacts]]
artifact_name = "amcache"
[artifacts.amcache]
Since the alt_drive parameter is optional for amcache we do not need to specify it.
[[artifacts]]
artifact_name = "shortcuts"
[artifacts.shortcuts]
path = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup"
- [[artifacts]]: The second entry in our list of artifacts to collect
- artifact_name: Name of artifact
- [artifacts.shortcuts]: Artifact configuration parameters
- path: Use the provided path to collect shortcuts data. This parameter is required
Since [[artifacts]] is a list we can even provide the same artifact multiple times:
[[artifacts]]
artifact_name = "shortcuts"
[artifacts.shortcuts]
path = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup"
[[artifacts]]
artifact_name = "shortcuts"
[artifacts.shortcuts]
path = "D:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup"
[[artifacts]]
artifact_name = "shortcuts"
[artifacts.shortcuts]
path = "E:\\Users\\rust\\Downloads"
However, providing the same artifact multiple times can be repetitive. See the chapter on scripting to learn how we can automate and enhance artifact collection using artemis and a tiny amount of JavaScript!
Finally, you can review the full list of all supported artifacts and their configuration in the artifact chapter.
Artemis Output Formats
artemis supports two (2) output formats: jsonl and json. Both will output the results using a random UUID for the filename, such as 68330d32-c35e-4d43-8655-1cb5e9d90b83.json
When you run artemis three (3) types of files will be generated:
- <uuid>.{json or jsonl}: a unique filename dependent on the format selected. These files contain the artifact data output. Depending on the collection, multiple <uuid>.{json or jsonl} files will be created
- <uuid>.log: a log file containing any errors or warnings generated by artemis during the collection. Only one (1) per collection will exist
- status.log: a log file that maps each <uuid>.{json or jsonl} file to an artifact name. <uuid>.{json or jsonl} also contains the artifact name. The status.log provides a quick way to see what files contain a specific artifact. Only one (1) per collection will exist
The json output from the amcache TOML collection from the previous page would look like the following:
{
  "metadata": {
    "endpoint_id": "6c51b123-1522-4572-9f2a-0bd5abd81b82",
    "id": 1,
    "uuid": "41bc55e4-bc7b-4798-8808-4351092595a5",
    "artifact_name": "amcache",
    "complete_time": 1680466070,
    "start_time": 1680466065,
    "hostname": "DESKTOP-UQQDFT8",
    "os_version": "11 (22000)",
    "platform": "Windows",
    "kernel_version": "22000",
    "load_performance": {
      "avg_one_min": 0.0,
      "avg_five_min": 0.0,
      "avg_fifteen_min": 0.0
    }
  },
  "data": [
    {
      "first_execution": 1641252583,
      "path": "c:\\program files (x86)\\windows kits\\10\\debuggers\\x86\\1394\\1394kdbg.sys",
      "name": "1394kdbg.sys",
      "original_name": "1394dbg.sys",
      "version": "10.0.19041.685 (winbuild.160101.0800)",
      "binary_type": "pe32_i386",
      "product_version": "10.0.19041.685",
      "product_name": "microsoft® windows® operating system",
      "language": "",
      "file_id": "",
      "link_date": "10/28/2087 21:21:59",
      "path_hash": "1394kdbg.sys|2912931c5988cc06",
      "program_id": "00a68cd0bda5b35cd2f03e8556cad622f00000904",
      "size": "38352",
      "publisher": "microsoft corporation",
      "usn": "4010442296",
      "sha1": "",
      "reg_path": "{11517B7C-E79D-4e20-961B-75A811715ADD}\\Root\\InventoryApplicationFile\\1394kdbg.sys|2912931c5988cc06"
    }
  ]
}
All artifacts parsed by artemis will be formatted similarly to the output above.
- metadata: object that contains metadata about the system. All artifacts will contain a metadata object
  - endpoint_id: The ID associated with the endpoint. This is from the TOML input
  - id: The ID associated with the collection. This is from the TOML input
  - uuid: Unique ID associated with the output
  - artifact_name: The name of the artifact collected. This is from the TOML input
  - complete_time: The time artemis completed parsing the data
  - start_time: The time artemis started parsing the data
  - hostname: The hostname of the endpoint
  - os_version: The OS version of the endpoint
  - platform: The platform of the endpoint. Ex: Windows or macOS
  - kernel_version: The kernel version of the endpoint
  - load_performance: The endpoint load performance for one, five, and fifteen minutes. On Windows these values are always zero
    - avg_one_min: Average load performance for one minute
    - avg_five_min: Average load performance for five minutes
    - avg_fifteen_min: Average load performance for fifteen minutes
- data: object that contains the artifact specific data. See the artifact chapter for the output structure of each artifact. If you execute JavaScript you can control what the data value is. For example, you can return a string instead of an object. artemis uses serde to serialize the final output
This data would be saved in a <uuid>.json file.
The jsonl output from the amcache TOML collection from the previous page would look like the following:
{"metadata":{"endpoint_id":"6c51b123-1522-4572-9f2a-0bd5abd81b82","id":1,"artifact_name":"amcache","complete_time":1680467122,"start_time":1680467120,"hostname":"DESKTOP-UQQDFT8","os_version":"11 (22000)","platform":"Windows","kernel_version":"22000","load_performance":{"avg_one_min":0.0,"avg_five_min":0.0,"avg_fifteen_min":0.0},"uuid":"64702816-0f24-4e6e-a72a-118cb51c55b4"},"data":{"first_execution":1641252583,"path":"c:\\program files (x86)\\windows kits\\10\\debuggers\\x86\\1394\\1394kdbg.sys","name":"1394kdbg.sys","original_name":"1394dbg.sys","version":"10.0.19041.685 (winbuild.160101.0800)","binary_type":"pe32_i386","product_version":"10.0.19041.685","product_name":"microsoft® windows® operating system","language":"","file_id":"","link_date":"10/28/2087 21:21:59","path_hash":"1394kdbg.sys|2912931c5988cc06","program_id":"00a68cd0bda5b35cd2f03e8556cad622f00000904","size":"38352","publisher":"microsoft corporation","usn":"4010442296","sha1":"","reg_path":"{11517B7C-E79D-4e20-961B-75A811715ADD}\\Root\\InventoryApplicationFile\\1394kdbg.sys|2912931c5988cc06"}}
{"metadata":{"endpoint_id":"6c51b123-1522-4572-9f2a-0bd5abd81b82","id":1,"artifact_name":"amcache","complete_time":1680467122,"start_time":1680467120,"hostname":"DESKTOP-UQQDFT8","os_version":"11 (22000)","platform":"Windows","kernel_version":"22000","load_performance":{"avg_one_min":0.0,"avg_five_min":0.0,"avg_fifteen_min":0.0},"uuid":"5afa02eb-1e11-48a0-993e-3bb852667db7"},"data":{"first_execution":1641252583,"path":"c:\\program files (x86)\\windows kits\\10\\debuggers\\x64\\1394\\1394kdbg.sys","name":"1394kdbg.sys","original_name":"1394dbg.sys","version":"10.0.19041.685 (winbuild.160101.0800)","binary_type":"pe64_amd64","product_version":"10.0.19041.685","product_name":"microsoft® windows® operating system","language":"","file_id":"","link_date":"11/30/2005 17:06:22","path_hash":"1394kdbg.sys|7e05880d5bf9d27b","program_id":"00a68cd0bda5b35cd2f03e8556cad622f00000904","size":"47568","publisher":"microsoft corporation","usn":"4010568800","sha1":"","reg_path":"{11517B7C-E79D-4e20-961B-75A811715ADD}\\Root\\InventoryApplicationFile\\1394kdbg.sys|7e05880d5bf9d27b"}}
...
{"metadata":{"endpoint_id":"6c51b123-1522-4572-9f2a-0bd5abd81b82","id":1,"artifact_name":"amcache","complete_time":1680467122,"start_time":1680467120,"hostname":"DESKTOP-UQQDFT8","os_version":"11 (22000)","platform":"Windows","kernel_version":"22000","load_performance":{"avg_one_min":0.0,"avg_five_min":0.0,"avg_fifteen_min":0.0},"uuid":"bce5fccc-9f13-40cd-bebd-95a32ead119a"},"data":{"first_execution":1641252542,"path":"c:\\program files\\git\\mingw64\\bin\\ziptool.exe","name":"ziptool.exe","original_name":"","version":"","binary_type":"pe64_amd64","product_version":"","product_name":"","language":"","file_id":"","link_date":"01/01/1970 00:00:00","path_hash":"ziptool.exe|7269435f129e6e01","program_id":"01286cf3cc5f1d161abf355f10fee583c0000ffff","size":"162258","publisher":"","usn":"3869400664","sha1":"","reg_path":"{11517B7C-E79D-4e20-961B-75A811715ADD}\\Root\\InventoryApplicationFile\\ziptool.exe|7269435f129e6e01"}}
{"metadata":{"endpoint_id":"6c51b123-1522-4572-9f2a-0bd5abd81b82","id":1,"artifact_name":"amcache","complete_time":1680467122,"start_time":1680467120,"hostname":"DESKTOP-UQQDFT8","os_version":"11 (22000)","platform":"Windows","kernel_version":"22000","load_performance":{"avg_one_min":0.0,"avg_five_min":0.0,"avg_fifteen_min":0.0},"uuid":"8437907f-53a4-43a2-8ff4-22acb3d06d72"},"data":{"first_execution":1641252542,"path":"c:\\program files\\git\\usr\\bin\\[.exe","name":"[.exe","original_name":"","version":"","binary_type":"pe64_amd64","product_version":"","product_name":"","language":"","file_id":"","link_date":"01/01/1970 00:00:00","path_hash":"[.exe|b6eac39997c90239","program_id":"01286cf3cc5f1d161abf355f10fee583c0000ffff","size":"68322","publisher":"","usn":"3870610520","sha1":"","reg_path":"{11517B7C-E79D-4e20-961B-75A811715ADD}\\Root\\InventoryApplicationFile\\[.exe|b6eac39997c90239"}}
The jsonl output is identical to json with the following differences:
- The values in data are split into separate lines instead of an array
- The uuid is unique for each json line
This data would be saved in a <uuid>.jsonl file.
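Because every jsonl line embeds the artifact_name in its metadata, you can also locate the files for a given artifact with a plain text search rather than reading status.log. A sketch, assuming the process collection output shown earlier:

```shell
# List every jsonl output file containing processes data
grep -l '"artifact_name":"processes"' ./tmp/process_collection/*.jsonl
```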
The <uuid>.log output from a collection contains any errors or warnings encountered during the collection.
The status.log output from a collection maps the <uuid>.{json or jsonl} files to an artifact name. A possible example from the macOS UnifiedLogs:
unifiedlogs:d45221df-349b-4467-b726-a9446865b259.json
unifiedlogs:eccd7b5b-4941-4134-a790-b073eb992188.json
As mentioned and seen above, you can also check the actual <uuid>.{json or jsonl} files to find the artifact_name.
Compression
If you choose to enable compression for the output, artemis will compress each <uuid>.{json or jsonl} file using gzip compression. The files will be saved as <uuid>.{json or jsonl}.gz. The log files are not compressed.
Once the collection is complete artemis will compress the whole output directory into a zip file and remove the output directory, leaving only the zip file.
Since artemis is running using elevated privileges it uses a cautious approach to deleting its data:
- It gets a list of files in its output directory and deletes files one at a time that end in: json, jsonl, gz, or log
- Once all output files are deleted, it will delete the empty directory
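To review a compressed collection afterwards, the gzip'd result files inside the extracted zip can be decompressed with the standard gzip utility. A sketch (the directory name is an example, after extracting the zip with any archive tool):

```shell
# Decompress every gzip'd result file in an extracted collection directory
gunzip ./collection_output/*.gz
```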
Remote Uploads
artemis has basic support for uploading collections to three (3) external cloud services:
- Google Cloud Platform (GCP)
- Microsoft Azure
- Amazon Web Services (AWS)
Uploading collections to a remote service requires three (3) pieces of configuration:
- Name of the remote service. Valid options are: "gcp", "azure", "aws"
- URL to the remote service
- A base64 encoded API key formatted based on the remote service selected in step 1
An example TOML Collection is below:
system = "windows"
[output]
name = "shimcache_collection"
directory = "hostname"
format = "json"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "gcp"
url = "https://storage.googleapis.com/upload/storage/v1/b/<INSERT BUCKET NAME>" # Make sure to include GCP Bucket name
api_key = "ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsCiAgInByb2plY3RfaWQiOiAiZmFrZW1lIiwKICAicHJpdmF0ZV9rZXlfaWQiOiAiZmFrZW1lIiwKICAicHJpdmF0ZV9rZXkiOiAiLS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tXG5NSUlFdndJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLa3dnZ1NsQWdFQUFvSUJBUUM3VkpUVXQ5VXM4Y0tqTXpFZll5amlXQTRSNC9NMmJTMUdCNHQ3TlhwOThDM1NDNmRWTXZEdWljdEdldXJUOGpOYnZKWkh0Q1N1WUV2dU5Nb1NmbTc2b3FGdkFwOEd5MGl6NXN4alptU25YeUNkUEVvdkdoTGEwVnpNYVE4cytDTE95UzU2WXlDRkdlSlpxZ3R6SjZHUjNlcW9ZU1c5YjlVTXZrQnBaT0RTY3RXU05HajNQN2pSRkRPNVZvVHdDUUFXYkZuT2pEZkg1VWxncDJQS1NRblNKUDNBSkxRTkZOZTdicjFYYnJoVi8vZU8rdDUxbUlwR1NEQ1V2M0UwRERGY1dEVEg5Y1hEVFRsUlpWRWlSMkJ3cFpPT2tFL1owL0JWbmhaWUw3MW9aVjM0YktmV2pRSXQ2Vi9pc1NNYWhkc0FBU0FDcDRaVEd0d2lWdU5kOXR5YkFnTUJBQUVDZ2dFQkFLVG1qYVM2dGtLOEJsUFhDbFRRMnZwei9ONnV4RGVTMzVtWHBxYXNxc2tWbGFBaWRnZy9zV3FwalhEYlhyOTNvdElNTGxXc00rWDBDcU1EZ1NYS2VqTFMyang0R0RqSTFaVFhnKyswQU1KOHNKNzRwV3pWRE9mbUNFUS83d1hzMytjYm5YaEtyaU84WjAzNnE5MlFjMStOODdTSTM4bmtHYTBBQkg5Q044M0htUXF0NGZCN1VkSHp1SVJlL21lMlBHaElxNVpCemo2aDNCcG9QR3pFUCt4M2w5WW1LOHQvMWNOMHBxSStkUXdZZGdmR2phY2tMdS8ycUg4ME1DRjdJeVFhc2VaVU9KeUtyQ0x0U0QvSWl4di9oekRFVVBmT0NqRkRnVHB6ZjNjd3RhOCtvRTR3SENvMWlJMS80VGxQa3dtWHg0cVNYdG13NGFRUHo3SURRdkVDZ1lFQThLTlRoQ08yZ3NDMkk5UFFETS84Q3cwTzk4M1dDRFkrb2krN0pQaU5BSnd2NURZQnFFWkIxUVlkajA2WUQxNlhsQy9IQVpNc01rdTFuYTJUTjBkcml3ZW5RUVd6b2V2M2cyUzdnUkRvUy9GQ0pTSTNqSitramd0YUE3UW16bGdrMVR4T0ROK0cxSDkxSFc3dDBsN1ZuTDI3SVd5WW8ycVJSSzNqenhxVWlQVUNnWUVBeDBvUXMycmVCUUdNVlpuQXBEMWplcTduNE12TkxjUHZ0OGIvZVU5aVV2Nlk0TWowU3VvL0FVOGxZWlhtOHViYnFBbHd6MlZTVnVuRDJ0T3BsSHlNVXJ0Q3RPYkFmVkRVQWhDbmRLYUE5Z0FwZ2ZiM3h3MUlLYnVRMXU0SUYxRkpsM1Z0dW1mUW4vL0xpSDFCM3JYaGNkeW8zL3ZJdHRFazQ4UmFrVUtDbFU4Q2dZRUF6VjdXM0NPT2xERGNRZDkzNURkdEtCRlJBUFJQQWxzcFFVbnpNaTVlU0hNRC9JU0xEWTVJaVFIYklIODNENGJ2WHEwWDdxUW9TQlNOUDdEdnYzSFl1cU1oZjBEYWVncmxCdUpsbEZWVnE5cVBWUm5LeHQxSWwySGd4T0J2YmhPVCs5aW4xQnpBK1lKOTlVekM4NU8wUXowNkErQ210SEV5NGFaMmtqNWhIakVDZ1lFQW1OUzQrQThGa3NzOEpzMVJpZUsyTG5pQnhNZ21ZbWwzcGZWTEtHbnptbmc3SDIrY3dQTGhQSXpJdXd5dFh5d2gyYnpic1lFZll4M0VvR
VZnTUVwUGhvYXJRbllQdWtySk80Z3dFMm81VGU2VDVtSlNaR2xRSlFqOXE0WkIyRGZ6ZXQ2SU5zSzBvRzhYVkdYU3BRdlFoM1JVWWVrQ1pRa0JCRmNwcVdwYklFc0NnWUFuTTNEUWYzRkpvU25YYU1oclZCSW92aWM1bDB4RmtFSHNrQWpGVGV2Tzg2RnN6MUMyYVNlUktTcUdGb09RMHRtSnpCRXMxUjZLcW5ISW5pY0RUUXJLaEFyZ0xYWDR2M0NkZGpmVFJKa0ZXRGJFL0NrdktaTk9yY2YxbmhhR0NQc3BSSmoyS1VrajFGaGw5Q25jZG4vUnNZRU9OYndRU2pJZk1Qa3Z4Ris4SFE9PVxuLS0tLS1FTkQgUFJJVkFURSBLRVktLS0tLVxuIiwKICAiY2xpZW50X2VtYWlsIjogImZha2VAZ3NlcnZpY2VhY2NvdW50LmNvbSIsCiAgImNsaWVudF9pZCI6ICJmYWtlbWUiLAogICJhdXRoX3VyaSI6ICJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20vby9vYXV0aDIvYXV0aCIsCiAgInRva2VuX3VyaSI6ICJodHRwczovL29hdXRoMi5nb29nbGVhcGlzLmNvbS90b2tlbiIsCiAgImF1dGhfcHJvdmlkZXJfeDUwOV9jZXJ0X3VybCI6ICJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9vYXV0aDIvdjEvY2VydHMiLAogICJjbGllbnRfeDUwOV9jZXJ0X3VybCI6ICJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9yb2JvdC92MS9tZXRhZGF0YS94NTA5L2Zha2VtZSIsCiAgInVuaXZlcnNlX2RvbWFpbiI6ICJnb29nbGVhcGlzLmNvbSIKfQo="
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
WARNING
Currently artemis does not securely protect the remote API key. Make sure the account associated with the API key has only the permissions needed by artemis. The only permission artemis requires is the ability to create/write data to a bucket.
In addition, make sure the account only has access to a dedicated bucket for artemis.
For example:
- Create a bucket called artemis-uploads
- Create an account called artemis-uploader and generate an API key
- Only allow the account artemis-uploader to upload data to artemis-uploads. It has no other access.
If you do not want to expose the remote API key, you can output the data to a local directory, network share, or external drive. Then upload the data using an alternative tool.
GCP
The GCP upload process is based on the upload process Velociraptor uses https://velociraptor.velocidex.com/triage-with-velociraptor-pt-3-d6f63215f579.
High Level Steps:
- Create a bucket. Make sure the bucket is not public. This bucket will hold the data uploaded by artemis.
- Create a service account with no permissions.
- Create and download the service account key. This should be a JSON file.
- Assign the service account access to the newly created bucket. The service account should only need Storage Object Creator
- Base64 encode the service account JSON file
- Create a TOML collection and use https://storage.googleapis.com/upload/storage/v1/b/<BUCKETNAME> for your url. Use the base64 encoded string from step 5 as your api_key
- Execute artemis and provide the TOML collection as either a file or a base64 encoded argument
- Delete the service account key once you are done collecting data using artemis
An example TOML Collection is below:
system = "windows"
[output]
name = "shimcache_collection"
directory = "dev-workstations"
format = "jsonl"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "gcp"
url = "https://storage.googleapis.com/upload/storage/v1/b/shimcache-gcp-bucket" # Make sure to include GCP Bucket name
api_key = "ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsCiAgInByb2plY3RfaWQiOiAiZmFrZW1lIiwKICAicHJpdmF0ZV9rZXlfaWQiOiAiZmFrZW1lIiwKICAicHJpdmF0ZV9rZXkiOiAiLS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tXG5NSUlFdndJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLa3dnZ1NsQWdFQUFvSUJBUUM3VkpUVXQ5VXM4Y0tqTXpFZll5amlXQTRSNC9NMmJTMUdCNHQ3TlhwOThDM1NDNmRWTXZEdWljdEdldXJUOGpOYnZKWkh0Q1N1WUV2dU5Nb1NmbTc2b3FGdkFwOEd5MGl6NXN4alptU25YeUNkUEVvdkdoTGEwVnpNYVE4cytDTE95UzU2WXlDRkdlSlpxZ3R6SjZHUjNlcW9ZU1c5YjlVTXZrQnBaT0RTY3RXU05HajNQN2pSRkRPNVZvVHdDUUFXYkZuT2pEZkg1VWxncDJQS1NRblNKUDNBSkxRTkZOZTdicjFYYnJoVi8vZU8rdDUxbUlwR1NEQ1V2M0UwRERGY1dEVEg5Y1hEVFRsUlpWRWlSMkJ3cFpPT2tFL1owL0JWbmhaWUw3MW9aVjM0YktmV2pRSXQ2Vi9pc1NNYWhkc0FBU0FDcDRaVEd0d2lWdU5kOXR5YkFnTUJBQUVDZ2dFQkFLVG1qYVM2dGtLOEJsUFhDbFRRMnZwei9ONnV4RGVTMzVtWHBxYXNxc2tWbGFBaWRnZy9zV3FwalhEYlhyOTNvdElNTGxXc00rWDBDcU1EZ1NYS2VqTFMyang0R0RqSTFaVFhnKyswQU1KOHNKNzRwV3pWRE9mbUNFUS83d1hzMytjYm5YaEtyaU84WjAzNnE5MlFjMStOODdTSTM4bmtHYTBBQkg5Q044M0htUXF0NGZCN1VkSHp1SVJlL21lMlBHaElxNVpCemo2aDNCcG9QR3pFUCt4M2w5WW1LOHQvMWNOMHBxSStkUXdZZGdmR2phY2tMdS8ycUg4ME1DRjdJeVFhc2VaVU9KeUtyQ0x0U0QvSWl4di9oekRFVVBmT0NqRkRnVHB6ZjNjd3RhOCtvRTR3SENvMWlJMS80VGxQa3dtWHg0cVNYdG13NGFRUHo3SURRdkVDZ1lFQThLTlRoQ08yZ3NDMkk5UFFETS84Q3cwTzk4M1dDRFkrb2krN0pQaU5BSnd2NURZQnFFWkIxUVlkajA2WUQxNlhsQy9IQVpNc01rdTFuYTJUTjBkcml3ZW5RUVd6b2V2M2cyUzdnUkRvUy9GQ0pTSTNqSitramd0YUE3UW16bGdrMVR4T0ROK0cxSDkxSFc3dDBsN1ZuTDI3SVd5WW8ycVJSSzNqenhxVWlQVUNnWUVBeDBvUXMycmVCUUdNVlpuQXBEMWplcTduNE12TkxjUHZ0OGIvZVU5aVV2Nlk0TWowU3VvL0FVOGxZWlhtOHViYnFBbHd6MlZTVnVuRDJ0T3BsSHlNVXJ0Q3RPYkFmVkRVQWhDbmRLYUE5Z0FwZ2ZiM3h3MUlLYnVRMXU0SUYxRkpsM1Z0dW1mUW4vL0xpSDFCM3JYaGNkeW8zL3ZJdHRFazQ4UmFrVUtDbFU4Q2dZRUF6VjdXM0NPT2xERGNRZDkzNURkdEtCRlJBUFJQQWxzcFFVbnpNaTVlU0hNRC9JU0xEWTVJaVFIYklIODNENGJ2WHEwWDdxUW9TQlNOUDdEdnYzSFl1cU1oZjBEYWVncmxCdUpsbEZWVnE5cVBWUm5LeHQxSWwySGd4T0J2YmhPVCs5aW4xQnpBK1lKOTlVekM4NU8wUXowNkErQ210SEV5NGFaMmtqNWhIakVDZ1lFQW1OUzQrQThGa3NzOEpzMVJpZUsyTG5pQnhNZ21ZbWwzcGZWTEtHbnptbmc3SDIrY3dQTGhQSXpJdXd5dFh5d2gyYnpic1lFZll4M0VvR
VZnTUVwUGhvYXJRbllQdWtySk80Z3dFMm81VGU2VDVtSlNaR2xRSlFqOXE0WkIyRGZ6ZXQ2SU5zSzBvRzhYVkdYU3BRdlFoM1JVWWVrQ1pRa0JCRmNwcVdwYklFc0NnWUFuTTNEUWYzRkpvU25YYU1oclZCSW92aWM1bDB4RmtFSHNrQWpGVGV2Tzg2RnN6MUMyYVNlUktTcUdGb09RMHRtSnpCRXMxUjZLcW5ISW5pY0RUUXJLaEFyZ0xYWDR2M0NkZGpmVFJKa0ZXRGJFL0NrdktaTk9yY2YxbmhhR0NQc3BSSmoyS1VrajFGaGw5Q25jZG4vUnNZRU9OYndRU2pJZk1Qa3Z4Ris4SFE9PVxuLS0tLS1FTkQgUFJJVkFURSBLRVktLS0tLVxuIiwKICAiY2xpZW50X2VtYWlsIjogImZha2VAZ3NlcnZpY2VhY2NvdW50LmNvbSIsCiAgImNsaWVudF9pZCI6ICJmYWtlbWUiLAogICJhdXRoX3VyaSI6ICJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20vby9vYXV0aDIvYXV0aCIsCiAgInRva2VuX3VyaSI6ICJodHRwczovL29hdXRoMi5nb29nbGVhcGlzLmNvbS90b2tlbiIsCiAgImF1dGhfcHJvdmlkZXJfeDUwOV9jZXJ0X3VybCI6ICJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9vYXV0aDIvdjEvY2VydHMiLAogICJjbGllbnRfeDUwOV9jZXJ0X3VybCI6ICJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9yb2JvdC92MS9tZXRhZGF0YS94NTA5L2Zha2VtZSIsCiAgInVuaXZlcnNlX2RvbWFpbiI6ICJnb29nbGVhcGlzLmNvbSIKfQo="
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
Azure
The Azure upload process is based on the Azure Blob upload process Velociraptor uses https://docs.velociraptor.app/docs/offline_triage/remote_uploads.
High level steps:
- Create a Storage Account
- Create a Container under the new Storage Account
- Add a Role Assignment to the Storage Account
- Generate a Shared Access Signature (SAS) Policy for the created Container in step 2. Make sure to only allow create and write access
- Copy the Blob SAS URL
- Create a TOML collection and use the Blob SAS URL for the `url` option
- Execute `artemis` and provide the TOML collection as either a file or a base64 encoded argument
An example TOML Collection is below:
system = "windows"
[output]
name = "shimcache_collection"
directory = "dev-workstations"
format = "jsonl"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "azure"
url = "https://uploadertest.blob.core.windows.net/uploads?sp=cw....." # Make sure you copied the Blob SAS URL
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
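The SAS policy step above is where least privilege is enforced. As a quick sanity check before deployment, the `sp` query parameter of the copied Blob SAS URL should list only create (`c`) and write (`w`) permissions. A small sketch (the URL below is a made-up placeholder):

```typescript
// Placeholder Blob SAS URL; a real one is copied from the Azure portal.
const sasUrl = new URL(
  "https://uploadertest.blob.core.windows.net/uploads?sp=cw&sv=2022-11-02&sig=fake",
);

// The `sp` parameter lists the granted permissions (c = create, w = write).
const permissions = sasUrl.searchParams.get("sp") ?? "";
const onlyCreateWrite = [...permissions].every((p) => p === "c" || p === "w");

console.log(onlyCreateWrite); // true for a correctly scoped policy
```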
AWS
The AWS upload is based on the upload process Velociraptor uses https://docs.velociraptor.app/blog/2020/2020-07-14-triage-with-velociraptor-pt-4-cf0e60810d1e
High level steps:
- Create an S3 bucket. Make sure the bucket is not public. This bucket will hold the data uploaded by `artemis`
- Create a new user. This user does not need access to the AWS Console
- Create a new policy.
- Only S3 PutObject permission is required
- Limit the policy to only apply to the created bucket in step 1.
- Create a new User Group. Add the user created in step 2 and apply the policy created in step 3
- Create Access Keys for the user created in step 2. Create a JSON blob formatted like below:
{
"bucket": "yourbucketname",
"secret": "yoursecretfromyouraccount",
"key": "yourkeyfromyouraccount",
"region": "yourbucketregion"
}
- Create a TOML collection and use `https://s3.amazonaws.com` for your `url`. Base64 encode the JSON blob from step 5 as your `api_key`
- Execute `artemis` and provide the TOML collection as either a file or a base64 encoded argument
- Delete the API key once you are done collecting data using `artemis`
An example TOML Collection is below:
system = "windows"
[output]
name = "shimcache_collection"
directory = "dev-workstations"
format = "jsonl"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "aws"
url = "https://s3.amazonaws.com"
api_key = "ewogICAgImJ1Y2tldCI6ICJibGFoIiwKICAgICJzZWNyZXQiOiAicGtsNkFpQWFrL2JQcEdPenlGVW9DTC96SW1hSEoyTzVtR3ZzVWxSTCIsCiAgICAia2V5IjogIkFLSUEyT0dZQkFINlRPSUFVSk1SIiwKICAgICJyZWdpb24iOiAidXMtZWFzdC0yIgp9"
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
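The `api_key` in the example above is just the JSON credential blob from step 5, base64 encoded. A minimal sketch (all values are placeholders):

```typescript
// Placeholder AWS credentials; replace with the values from your account.
const creds = {
  bucket: "yourbucketname",
  secret: "yoursecretfromyouraccount",
  key: "yourkeyfromyouraccount",
  region: "yourbucketregion",
};

// Base64 encode the JSON blob for use as the TOML `api_key` value.
const apiKey = btoa(JSON.stringify(creds));
console.log(apiKey);
```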
Artifacts Overview
`artemis` supports over 20 different types of artifacts. All of these artifacts can be collected from a TOML collection or from a simple JavaScript script.
A breakdown of artifacts by OS is below.
Windows
Currently `artemis` has been tested on Windows 8.1 and higher. `artemis` supports multiple complex binary artifacts on Windows such as:
- `NTFS` - `artemis` can parse the raw `NTFS` disk using the ntfs crate
- `Registry` - `artemis` can parse `Registry` files on disk
- `ESE` - `artemis` can parse `ESE` database files on disk
- `Event Logs` - `artemis` can parse `Event Logs` using the evtx crate
A main focus of the `artemis-core` library is to make a best effort not to rely on the Windows APIs. Since `artemis-core` is a forensic focused library, we do not want to rely on APIs from a potentially compromised system.
However, `artemis-core` does use the Windows API for a handful of artifacts:
- `Processes` - The sysinfo crate is used to pull a process listing using Windows APIs
- `Systeminfo` - The sysinfo crate is also used to get system information using Windows APIs
- The Windows API is also used to decompress proprietary Windows compression algorithms. Both `Prefetch` and some `NTFS` files may be compressed; `artemis-core` will attempt to use the Windows API to decompress these files
Amcache
Windows `Amcache` stores metadata related to execution of Windows applications. Data is stored in the `C:\Windows\appcompat\Programs\Amcache.hve` Registry file. This Registry file also contains other metadata such as OS, hardware, and application info. However, `artemis` will only collect data related to the execution of Windows applications.
The `Registry` artifact may be used if you want to collect the full Registry data from `Amcache.hve`.
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "amcache_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "amcache"
[artifacts.amcache]
# Optional
# alt_drive = 'D'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing `Amcache`. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`)
Output Structure
An array of Amcache
entries
export interface Amcache {
/**Timestamp when the application was first executed in UNIXEPOCH seconds */
first_execution: number;
/**Path to application */
path: string;
/**Name of application */
name: string;
/**Original name of application from PE metadata */
original_name: string;
/**Version of application from PE metadata */
version: string;
/**Executable type and arch information */
binary_type: string;
/**Application product version from PE metadata */
product_version: string;
/**Application product name from PE metadata */
product_name: string;
/**Application language */
language: string;
/**Application file ID. This is also the SHA1 hash */
file_id: string;
/**Application linking timestamp as MM/DD/YYYY HH:mm:ss*/
link_date: string;
/**Hash of application path */
path_hash: string;
/**Program ID associated with the application */
program_id: string;
/**Size of application */
size: string;
/**Application publisher from PE metadata */
publisher: string;
/**Application Update Sequence Number (USN) */
usn: string;
/**SHA1 hash of the first ~31MBs of the application */
sha1: string;
/**Path in the Amcache.hve file */
reg_path: string;
}
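When reviewing Amcache output, a common first step is ordering entries by first execution time. A sketch against hand-made entries (the paths and timestamps below are fabricated for illustration):

```typescript
// Minimal subset of the Amcache interface needed for this example.
interface AmcacheEntry {
  first_execution: number; // UNIXEPOCH seconds
  path: string;
}

// Fabricated sample data.
const entries: AmcacheEntry[] = [
  { first_execution: 1667188800, path: "C:\\Tools\\procdump.exe" },
  { first_execution: 1667102400, path: "C:\\Windows\\System32\\notepad.exe" },
];

// Sort oldest execution first to build a simple execution timeline.
const timeline = [...entries].sort((a, b) => a.first_execution - b.first_execution);
console.log(timeline[0].path); // earliest execution
```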
BITS
Windows Background Intelligent Transfer Service (`BITS`) is a service that allows applications and users to register jobs to upload/download file(s). It is commonly used by applications to download updates. Starting on Windows 10, BITS data is stored in an ESE database. Pre-Windows 10 it is stored in a proprietary binary format.
`BITS` data is stored at `C:\ProgramData\Microsoft\Network\Downloader\qmgr.db`
Other Parsers:
- BitsParser
- Bits_Parser (Only supports pre-Windows 10 BITS files)
References:
TOML Collection
system = "windows"
[output]
name = "bits_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "bits"
[artifacts.bits]
carve = true
# Optional
# alt_path = "D:\\ProgramData\\Microsoft\\Network\\Downloader\\qmgr.db"
Collection Options
- `carve`: Boolean value to carve deleted `BITS` jobs and files from `qmgr.db`
- `alt_path`: Use an alternative path to the `qmgr.db` file. This configuration is optional. By default `artemis` will use `%systemdrive%\ProgramData\Microsoft\Network\Downloader\qmgr.db`
Output Structure
A `Bits` object that contains arrays of jobs, carved jobs, and carved files
export interface Bits {
/**Array of data containing BITS info */
bits: BitsInfo[];
/**Array of carved jobs */
carved_jobs: Jobs[];
/**Array of carved files */
carved_files: Files[];
}
/**
* Combination of parsed Jobs and File info from BITS
*/
export interface BitsInfo {
/**ID for the Job */
job_id: string;
/**ID for the File */
file_id: string;
/**SID associated with the Job */
owner_sid: string;
/**Timestamp when the Job was created in UNIXEPOCH seconds */
created: number;
/**Timestamp when the Job was modified in UNIXEPOCH seconds */
modified: number;
/**Timestamp when the Job was completed in UNIXEPOCH seconds */
completed: number;
/**Timestamp when the Job was expired in UNIXEPOCH seconds */
expiration: number;
/**Files associated with the Job */
files_total: number;
/**Number of bytes downloaded */
bytes_downloaded: number;
/**Number of bytes transferred */
bytes_transferred: number;
/**Name associated with Job */
job_name: string;
/**Description associated with Job */
job_description: string;
/**Commands associated with Job */
job_command: string;
/**Arguments associated with Job */
job_arguments: string;
/**Error count with the Job */
error_count: number;
/**BITS Job type */
job_type: string;
/**BITS Job state */
job_state: string;
/**Job priority */
priority: string;
/**BITS Job flags */
flags: string;
/**HTTP Method associated with Job */
http_method: string;
/**Full file path associated with Job */
full_path: string;
/**Filename associated with Job */
filename: string;
/**Target file path associated with Job */
target_path: string;
/**TMP file path associated with the Job */
tmp_file: string;
/**Volume path associated with the file */
volume: string;
/**URL associated with the Job */
url: string;
/**If the BITS info was carved */
carved: boolean;
/**Transient error count with Job */
transient_error_count: number;
/**Permissions associated with the Job */
acls: AccessControl[];
/**Job timeout in seconds */
timeout: number;
/**Job retry delay in seconds */
retry_delay: number;
/**Additional SIDs associated with Job */
additional_sids: string[];
}
/**
* Jobs from BITS
*/
export interface Jobs {
/**ID for the Job */
job_id: string;
/**ID for the File */
file_id: string;
/**SID associated with the Job */
owner_sid: string;
/**Timestamp when the Job was created in UNIXEPOCH seconds */
created: number;
/**Timestamp when the Job was modified in UNIXEPOCH seconds */
modified: number;
/**Timestamp when the Job was completed in UNIXEPOCH seconds */
completed: number;
/**Timestamp when the Job was expired in UNIXEPOCH seconds */
expiration: number;
/**Name associated with Job */
job_name: string;
/**Description associated with Job */
job_description: string;
/**Commands associated with Job */
job_command: string;
/**Arguments associated with Job */
job_arguments: string;
/**Error count with the Job */
error_count: number;
/**BITS Job type */
job_type: string;
/**BITS Job state */
job_state: string;
/**Job priority */
priority: string;
/**BITS Job flags */
flags: string;
/**HTTP Method associated with Job */
http_method: string;
/**Transient error count with Job */
transient_error_count: number;
/**Permissions associated with the Job */
acls: AccessControl[];
/**Job timeout in seconds */
timeout: number;
/**Job retry delay in seconds */
retry_delay: number;
}
/**
* File(s) associated with Jobs
*/
export interface Files {
/**ID for the File */
file_id: string;
/**Files associated with the Job */
files_transferred: number;
/**Number of bytes downloaded */
download_bytes_size: number;
/**Number of bytes transferred */
trasfer_bytes_size: number;
/**Full file path associated with Job */
full_path: string;
/**Filename associated with Job */
filename: string;
/**Target file path associated with Job */
target_path: string;
/**TMP file path associated with the Job */
tmp_file: string;
/**Volume path associated with the file */
volume: string;
/**URL associated with the Job */
url: string;
}
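A common triage step over this output is flagging jobs that were carved or that download from raw IP addresses, both frequent starting points when hunting BITS abuse. A sketch over fabricated jobs (the URLs and names below are invented):

```typescript
// Minimal subset of the BitsInfo interface for this example.
interface BitsJob {
  job_name: string;
  url: string;
  carved: boolean;
}

// Fabricated sample jobs.
const jobs: BitsJob[] = [
  { job_name: "Font Download", url: "https://fs.microsoft.com/fs/windows/config.json", carved: false },
  { job_name: "updater", url: "http://198.51.100.7/payload.bin", carved: true },
];

// Flag carved jobs and jobs downloading from a bare IPv4 address.
const ipUrl = /^https?:\/\/\d{1,3}(\.\d{1,3}){3}\//;
const suspicious = jobs.filter((j) => j.carved || ipUrl.test(j.url));
console.log(suspicious.length); // 1
```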
Event Logs
Windows `Event Logs` are the primary files associated with logging system activity. They are stored in a binary format, typically at `C:\Windows\System32\winevt\Logs`
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "eventlog_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "eventlogs"
[artifacts.eventlogs]
# Optional
# alt_drive = 'C'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing `Event Logs`. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`)
Output Structure
An array of EventLogRecord
entries
export interface EventLogRecord {
/**Event record number */
record_id: number;
/**Timestamp of eventlog message in UNIXEPOCH nanoseconds */
timestamp: number;
/**
* JSON object representation of the Eventlog message
* Depending on the log the JSON object may have different types of keys
* Example entry:
* ```
* "data": {
* "Event": {
* "#attributes": {
* "xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"
* },
* "System": {
* "Provider": {
* "#attributes": {
* "Name": "Microsoft-Windows-Bits-Client",
* "Guid": "EF1CC15B-46C1-414E-BB95-E76B077BD51E"
* }
* },
* "EventID": 3,
* "Version": 3,
* "Level": 4,
* "Task": 0,
* "Opcode": 0,
* "Keywords": "0x4000000000000000",
* "TimeCreated": {
* "#attributes": {
* "SystemTime": "2022-10-31T04:24:19.946430Z"
* }
* },
* "EventRecordID": 2,
* "Correlation": null,
* "Execution": {
* "#attributes": {
* "ProcessID": 1332,
* "ThreadID": 780
* }
* },
* "Channel": "Microsoft-Windows-Bits-Client/Operational",
* "Computer": "DESKTOP-EIS938N",
* "Security": {
* "#attributes": {
* "UserID": "S-1-5-18"
* }
* }
* },
* "EventData": {
* "jobTitle": "Font Download",
* "jobId": "174718A5-F630-43D9-B378-728240ECE152",
* "jobOwner": "NT AUTHORITY\\LOCAL SERVICE",
* "processPath": "C:\\Windows\\System32\\svchost.exe",
* "processId": 1456,
* "ClientProcessStartKey": 844424930132016
* }
* }
* }
* ```
*/
data: Record<string, unknown>;
}
Files
A regular Windows filelisting. `artemis` uses the walkdir crate to recursively walk the files and directories on the system. If hashing or `PE` parsing is enabled, this will update the `Last Accessed` timestamps on files since the native OS APIs are used to access the files, and it will fail on any locked files. Use RawFiles to bypass locked files.
The standard Rust API does not support getting the `Changed/Entry Modified` timestamp on Windows. Use RawFiles to include the `Changed/Entry Modified` timestamp.
Since a filelisting can be extremely large, every 100k entries `artemis` will output the data and then continue.
Other Parsers:
- Any tool that can recursively list files and directories
References:
- N/A
TOML Collection
system = "windows"
[output]
name = "files_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "files" # Name of artifact
[artifacts.files]
start_path = "C:\\Windows" # Where to start the listing
# Optional
depth = 1 # How many sub directories to descend
# Optional
metadata = true # Get PE metadata
# Optional
md5 = true # MD5 all files
# Optional
sha1 = false # SHA1 all files
# Optional
sha256 = false # SHA256 all files
# Optional
path_regex = "" # Regex for paths
# Optional
file_regex = "" # Regex for files
Collection Options
- `start_path`: Where to start the file listing. Must exist on the endpoint. To start at root use `C:\\`. This configuration is required
- `depth`: Specify how many directories to descend from the `start_path`. Default is one (1). Must be a positive number. Max value is 255. This configuration is optional
- `metadata`: Get PE data from `PE` files. This configuration is optional. Default is false
- `md5`: Boolean value to enable MD5 hashing on all files. This configuration is optional. Default is false
- `sha1`: Boolean value to enable SHA1 hashing on all files. This configuration is optional. Default is false
- `sha256`: Boolean value to enable SHA256 hashing on all files. This configuration is optional. Default is false
- `path_regex`: Only descend into paths (directories) that match the provided regex. This configuration is optional. Default is no regex
- `file_regex`: Only return entries that match the provided regex. This configuration is optional. Default is no regex
Output Structure
An array of WindowsFileInfo
entries
export interface WindowsFileInfo {
/**Full path to file or directory */
full_path: string;
/**Directory path */
directory: string;
/**Filename */
filename: string;
/**Extension of file if any */
extension: string;
/**Created timestamp in UNIXEPOCH seconds */
created: number;
/**Modified timestamp in UNIXEPOCH seconds */
modified: number;
/**Changed timestamp in UNIXEPOCH seconds */
changed: number;
/**Accessed timestamp in UNIXEPOCH seconds */
accessed: number;
/**Size of file in bytes */
size: number;
/**Inode associated with entry */
inode: number;
/**Mode of file entry */
mode: number;
/**User ID associated with file */
uid: number;
/**Group ID associated with file */
gid: number;
/**MD5 of file */
md5: string;
/**SHA1 of file */
sha1: string;
/**SHA256 of file */
sha256: string;
/**Is the entry a file */
is_file: boolean;
/**Is the entry a directory */
is_directory: boolean;
/**Is the entry a symbolic link */
is_symlink: boolean;
/**Depth of the file from the provided start point */
depth: number;
/**PE binary metadata */
binary_info: PeInfo[];
}
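Since a listing can contain hundreds of thousands of entries, aggregating is often more useful than reading entries one by one. A sketch that counts files per extension, a quick way to spot outliers (the listing below is fabricated):

```typescript
// Minimal subset of WindowsFileInfo for this example.
interface FileEntry {
  full_path: string;
  extension: string;
  is_file: boolean;
}

// Fabricated sample listing.
const listing: FileEntry[] = [
  { full_path: "C:\\Windows\\notepad.exe", extension: "exe", is_file: true },
  { full_path: "C:\\Users\\Public\\run.exe", extension: "exe", is_file: true },
  { full_path: "C:\\Windows\\win.ini", extension: "ini", is_file: true },
];

// Count files per extension.
const counts = new Map<string, number>();
for (const entry of listing.filter((e) => e.is_file)) {
  counts.set(entry.extension, (counts.get(entry.extension) ?? 0) + 1);
}
console.log(counts.get("exe")); // 2
```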
Jumplists
Windows `Jumplists` files track files opened via applications in the Taskbar or Start Menu. Jumplists are actually a collection of embedded Shortcut files and therefore can show evidence of file interaction.
There are two (2) types of Jumplist files:
- Custom - Files that are pinned to Taskbar applications
- Automatic - Files that are not pinned to Taskbar applications
Other parsers:
References:
TOML Collection
system = "windows"
[output]
name = "jumplists_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "jumplists"
[artifacts.jumplists]
# Optional
# alt_drive = 'C'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing `Jumplists`. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`)
Output Structure
An array of Jumplists
entries
export interface Jumplists {
/**Path to Jumplist file */
path: string;
/**Jumplist type. Custom or Automatic */
jumplist_type: string;
/**Application ID for Jumplist file */
app_id: string;
/**Metadata associated with Jumplist entry */
jumplist_metadata: DestEntries;
/**Shortcut information for Jumplist entry */
lnk_info: Shortcut;
}
/**
* Metadata associated with Jumplist entry
*/
interface DestEntries {
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
droid_volume_id: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
droid_file_id: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
birth_droid_volume_id: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
birth_droid_file_id: string;
/**Hostname associated with Jumplist entry */
hostname: string;
/**Jumplist entry number */
entry: number;
/**Modified timestamp of Jumplist entry in UNIXEPOCH seconds */
modified: number;
/**Status if Jumplist entry is pinned. `Pinned` or `NotPinned` */
pin_status: string;
/**Path associated with Jumplist entry */
path: string;
}
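The `pin_status` field above distinguishes deliberate user pinning from automatic tracking; pinned entries are often the most interesting during review. A sketch filtering on that field (the entries are fabricated):

```typescript
// Minimal subset of the DestEntries interface for this example.
interface DestEntry {
  path: string;
  pin_status: string; // `Pinned` or `NotPinned`
  modified: number;
}

// Fabricated sample entries.
const entries: DestEntry[] = [
  { path: "C:\\Users\\bob\\report.docx", pin_status: "Pinned", modified: 1667190000 },
  { path: "C:\\Users\\bob\\notes.txt", pin_status: "NotPinned", modified: 1667100000 },
];

// Keep only entries the user explicitly pinned.
const pinned = entries.filter((e) => e.pin_status === "Pinned");
console.log(pinned[0].path);
```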
Portable Executable
Windows Portable Executable (`PE`) is the executable format for applications on Windows. `artemis` is able to parse basic metadata from `PE` files using the pelite crate.
Other Parsers:
References:
TOML Collection
There is no way to collect just `PE` data with `artemis`; instead it is an optional feature for the Windows `filelisting`, `rawfilelisting`, and `processes` artifacts.
However, it is possible to directly parse `PE` files by using `JavaScript`. See the scripts chapter for examples.
Collection Options
N/A
Output Structure
An object containing PE
info
export interface PeInfo {
/**Array of imported DLLs */
imports: string[];
/**Array of section names */
sections: string[];
/**Base64 encoded certificate information */
cert: string;
/**Path to PDB file */
pdb: string;
/**PE product version */
product_version: string;
/**PE file version */
file_version: string;
/**PE product name */
product_name: string;
/**PE company name */
company_name: string;
/**PE file description */
file_description: string;
/**PE internal name */
internal_name: string;
/**PE copyright */
legal_copyright: string;
/**PE original filename */
original_filename: string;
/**PE manifest info */
manifest: string;
/**Array of base64 icons */
icons: string[];
}
Prefetch
Windows `Prefetch` data tracks execution of applications on Windows workstations. `Prefetch` files are typically located at `C:\Windows\Prefetch`.
On Windows servers `Prefetch` is disabled, and it may also be disabled on systems with SSDs. Starting on Windows 10, `Prefetch` files are compressed using `LZXPRESS Huffman`. `artemis` uses the Windows API to decompress the data before parsing `Prefetch` files.
Other Parsers:
References: Libyal
TOML Collection
system = "windows"
[output]
name = "prefetch_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "prefetch"
[artifacts.prefetch]
# Optional
# alt_drive = 'D'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing `Prefetch`. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`)
Output Structure
An array of Prefetch
entries
export interface Prefetch {
/**Path to prefetch file */
path: string;
/**Name of executed file */
filename: string;
/**Prefetch hash */
hash: string;
/**Most recent execution timestamp in UNIXEPOCH seconds */
last_run_time: number;
/**Array of up to eight (8) execution timestamps in UNIXEPOCH seconds */
all_run_times: number[];
/**Number of executions */
run_count: number;
/**Size of executed file */
size: number;
/**Array of volume serial numbers associated with accessed files/directories */
volume_serial: string[];
/**Array of volume creation timestamps in UNIXEPOCH seconds associated with accessed files/directories */
volume_creation: number[];
/**Array of volumes associated accessed files/directories */
volume_path: string[];
/**Number of files accessed by executed file */
accessed_file_count: number;
/**Number of directories accessed by executed file */
accessed_directories_count: number;
/**Array of accessed files by executed file */
accessed_files: string[];
/**Array of accessed directories by executed file */
accessed_directories: string[];
}
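Since each entry carries up to eight run timestamps, flattening `all_run_times` across entries yields a simple execution timeline. A sketch over fabricated entries:

```typescript
// Minimal subset of the Prefetch interface for this example.
interface PrefetchEntry {
  filename: string;
  run_count: number;
  all_run_times: number[]; // UNIXEPOCH seconds, up to eight entries
}

// Fabricated sample entries.
const entries: PrefetchEntry[] = [
  { filename: "CMD.EXE", run_count: 12, all_run_times: [1667190000, 1667100000] },
  { filename: "POWERSHELL.EXE", run_count: 3, all_run_times: [1667195000] },
];

// Flatten every recorded execution into (timestamp, filename) pairs, newest first.
const timeline = entries
  .flatMap((e) => e.all_run_times.map((t) => ({ time: t, filename: e.filename })))
  .sort((a, b) => b.time - a.time);
console.log(timeline[0].filename); // POWERSHELL.EXE ran most recently
```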
Processes
Gets a standard process listing using the Windows API
Other Parsers:
- Any tool that calls the Windows API or can parse the raw Windows memory
References:
- N/A
TOML Collection
system = "windows"
[output]
name = "processes_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "processes"
[artifacts.processes]
metadata = true
# MD5 hash process binary
md5 = true
# SHA1 hash process binary
sha1 = false
# SHA256 hash process binary
sha256 = false
Collection Options
- `metadata`: Get PE data from the process binary
- `md5`: Boolean value to MD5 hash the process binary
- `sha1`: Boolean value to SHA1 hash the process binary
- `sha256`: Boolean value to SHA256 hash the process binary
Output Structure
An array of WindowsProcessInfo
entries
export interface WindowsProcessInfo {
/**Full path to the process binary */
full_path: string;
/**Name of process */
name: string;
/**Path to process binary */
path: string;
/** Process ID */
pid: number;
/** Parent Process ID */
ppid: number;
/**Environment variables associated with process */
environment: string;
/**Status of the process */
status: string;
/**Process arguments */
arguments: string;
/**Process memory usage */
memory_usage: number;
/**Process virtual memory usage */
virtual_memory_usage: number;
/**Process start time in UNIXEPOCH seconds*/
start_time: number;
/** User ID associated with process */
uid: string;
/**Group ID associated with process */
gid: string;
/**MD5 hash of process binary */
md5: string;
/**SHA1 hash of process binary */
sha1: string;
/**SHA256 hash of process binary */
sha256: string;
/**PE metadata asssociated with process binary */
binary_info: PeInfo[];
}
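With `pid` and `ppid` in each entry, the flat listing can be rebuilt into a process tree, which makes parent/child anomalies easier to spot. A sketch over fabricated processes:

```typescript
// Minimal subset of WindowsProcessInfo for this example.
interface Proc {
  pid: number;
  ppid: number;
  name: string;
}

// Fabricated sample listing.
const procs: Proc[] = [
  { pid: 4, ppid: 0, name: "System" },
  { pid: 1456, ppid: 4, name: "svchost.exe" },
  { pid: 2001, ppid: 1456, name: "cmd.exe" },
];

// Map each parent pid to its children to walk the process tree.
const children = new Map<number, Proc[]>();
for (const p of procs) {
  const list = children.get(p.ppid) ?? [];
  list.push(p);
  children.set(p.ppid, list);
}
console.log(children.get(1456)?.[0].name); // cmd.exe
```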
Raw Files
A raw Windows filelisting created by parsing the `NTFS` file system with the ntfs crate to recursively walk the files and directories on the system. If hashing or `PE` parsing is enabled, this will also read the file contents. `Raw Files` also supports decompressing compressed files with the `WofCompression` alternative data stream (ADS) attribute.
Since a filelisting can be extremely large, every 100k entries `artemis` will output the data and then continue.
Other Parsers:
- Any tool that parses the NTFS file system
References:
TOML Collection
system = "windows"
[output]
name = "ntfs_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "rawfiles"
[artifacts.rawfiles]
drive_letter = 'C'
start_path = "C:\\"
depth = 20
recover_indx = false
# Optional
metadata = true # Get PE metadata
# Optional
md5 = false
# Optional
sha1 = false
# Optional
sha256 = false
# Optional
path_regex = ""
# Optional
filename_regex = ""
Collection Options
- `drive_letter`: Drive letter to use to parse the NTFS file system. This configuration is required
- `start_path`: Directory to start walking the filesystem. This configuration is required
- `depth`: How many directories to descend from the `start_path`. Must be a positive number. Max value is 255. This configuration is required
- `recover_indx`: Boolean value to carve deleted entries from the `$INDX` attribute. Can show evidence of deleted files
- `metadata`: Get PE data from `PE` files. This configuration is optional. Default is false
- `md5`: Boolean value to enable MD5 hashing on all files. This configuration is optional. Default is false
- `sha1`: Boolean value to enable SHA1 hashing on all files. This configuration is optional. Default is false
- `sha256`: Boolean value to enable SHA256 hashing on all files. This configuration is optional. Default is false
- `path_regex`: Only descend into paths (directories) that match the provided regex. This configuration is optional. Default is no regex
- `filename_regex`: Only return entries that match the provided regex. This configuration is optional. Default is no regex
Output Structure
An array of `RawFileInfo` entries
export interface RawFileInfo {
/**Full path to file or directory */
full_path: string;
/**Directory path */
directory: string;
/**Filename */
filename: string;
/**Extension of file if any */
extension: string;
/**Created timestamp in UNIXEPOCH seconds */
created: number;
/**Modified timestamp in UNIXEPOCH seconds */
modified: number;
/**Changed timestamp in UNIXEPOCH seconds */
changed: number;
/**Accessed timestamp in UNIXEPOCH seconds */
accessed: number;
/**Filename created timestamp in UNIXEPOCH seconds */
filename_created: number;
/**Filename modified timestamp in UNIXEPOCH seconds */
filename_modified: number;
/**Filename accessed timestamp in UNIXEPOCH seconds */
filename_accessed: number;
/**Filename changed timestamp in UNIXEPOCH seconds */
filename_changed: number;
/**Size of file in bytes */
size: number;
/**Size of file if compressed */
compressed_size: number;
/**Compression type used on file */
compression_type: string;
/**Inode entry */
inode: number;
/**Sequence number for entry */
sequence_number: number;
/**Parent MFT reference for entry */
parent_mft_references: number;
/**Attributes associated with entry */
attributess: string[];
/**MD5 of file. Optional */
md5: string;
/**SHA1 of file. Optional */
sha1: string;
/**SHA256 of file. Optional */
sha256: string;
/**Is the entry a file */
is_file: boolean;
/**Is the entry a directory */
is_directory: boolean;
/** Is the entry carved from $INDX */
is_indx: boolean;
/**USN entry */
usn: number;
/**SID number associated with entry */
sid: number;
/**SID string associated with entry*/
user_sid: string;
/**Group SID associated with entry */
group_sid: string;
/**Drive letter */
drive: string;
/**ADS info associated with entry */
ads_info: AdsInfo[];
/**Depth the file from provided start point*/
depth: number;
/**PE binary metadata. Optional */
binary_info: PeInfo[];
}
/**
* Alternative Data Streams (ADS) are a NTFS feature to embed data in another data stream
*/
export interface AdsInfo {
/**Name of the ADS entry */
name: string;
/**Size of the ADS entry */
size: number;
}
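The `ads_info` array above can surface streams worth a closer look: `Zone.Identifier` is routine (mark-of-the-web), while any other named stream carrying data may hide content. A sketch over fabricated entries:

```typescript
// Minimal subsets of RawFileInfo/AdsInfo for this example.
interface Ads {
  name: string;
  size: number;
}
interface RawEntry {
  full_path: string;
  ads_info: Ads[];
}

// Fabricated sample entries.
const entries: RawEntry[] = [
  { full_path: "C:\\Users\\bob\\setup.exe", ads_info: [{ name: "Zone.Identifier", size: 26 }] },
  { full_path: "C:\\Temp\\note.txt", ads_info: [{ name: "hidden", size: 4096 }] },
];

// Flag entries with a non-Zone.Identifier stream that actually holds data.
const unusual = entries.filter((e) =>
  e.ads_info.some((a) => a.name !== "Zone.Identifier" && a.size > 0),
);
console.log(unusual[0].full_path);
```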
RecycleBin
The Windows `RecycleBin` is a special folder on Windows that stores files deleted using the Explorer GUI. When a file is deleted (via Explorer), two (2) types of files are generated in the `RecycleBin`:
- Files that begin with `$I<number>.<original extension>`. Contains metadata about the deleted file
- Files that begin with `$R<number>.<original extension>`. Contents of the deleted file
Currently `artemis` supports parsing the `$I` based files, using the standard Windows APIs to read the file for parsing. It does not try to recover files that have been deleted/emptied from the `RecycleBin`
Other parsers:
References:
TOML Collection
system = "windows"
[output]
name = "recyclebin_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "recyclebin"
[artifacts.recyclebin]
# alt_drive = 'C'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing `RecycleBin`. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`)
Output Structure
An array of RecycleBin
entries
export interface RecycleBin {
/**Size of deleted file */
size: number;
/**Deleted timestamp of file in UNIXEPOCH seconds */
deleted: number;
/**Name of deleted file */
filename: string;
/**Full path to the deleted file */
full_path: string;
/**Directory associated with deleted file */
directory: string;
/**SID associated with the deleted file */
sid: string;
/**Path to the file in the Recycle Bin */
recycle_path: string;
}
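Simple aggregations over these entries can summarize deletion activity, e.g. total bytes deleted and the most recent deletion. A sketch over fabricated entries:

```typescript
// Minimal subset of the RecycleBin interface for this example.
interface RecycleEntry {
  filename: string;
  size: number;
  deleted: number; // UNIXEPOCH seconds
}

// Fabricated sample entries.
const entries: RecycleEntry[] = [
  { filename: "budget.xlsx", size: 52430, deleted: 1667190000 },
  { filename: "evidence.zip", size: 1048576, deleted: 1667193600 },
];

// Total bytes deleted via Explorer, plus the most recent deletion.
const totalBytes = entries.reduce((sum, e) => sum + e.size, 0);
const newest = entries.reduce((a, b) => (a.deleted > b.deleted ? a : b));
console.log(totalBytes, newest.filename);
```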
Registry
The Windows `Registry` is a collection of binary files that store Windows configuration settings and OS information. There are multiple `Registry` files on a system, such as:
- `C:\Windows\System32\config\SYSTEM`
- `C:\Windows\System32\config\SOFTWARE`
- `C:\Windows\System32\config\SAM`
- `C:\Windows\System32\config\SECURITY`
- `C:\Users\%\NTUSER.DAT`
- `C:\Users\%\AppData\Local\Microsoft\Windows\UsrClass.dat`
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "registry_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "registry" # Parses the whole Registry file
[artifacts.registry]
user_hives = true # All NTUSER.DAT and UsrClass.dat
system_hives = true # SYSTEM, SOFTWARE, SAM, SECURITY
# Optional
# alt_drive = 'D'
# Optional
# path_regex = "" # Registry is converted to lowercase before all comparison operations. So any regex input will also be converted to lowercase
Collection Options
- `user_hives`: Parse all user `Registry` files `NTUSER.DAT` and `UsrClass.dat`. This configuration is required
- `system_hives`: Parse all system `Registry` files `SYSTEM`, `SAM`, `SOFTWARE`, `SECURITY`. This configuration is required
- `alt_drive`: Use an alternative drive. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`)
- `path_regex`: Only return `Registry` keys that match the provided `path_regex`. All comparisons are first converted to lowercase. This configuration is optional. Default is no regex
Output Structure
An array of RegistryData
entries for each parsed Registry
file
export interface RegistryData {
/**Path to Registry file */
registry_path: string;
/**Registry file name */
registry_file: string;
/**Array of Registry entries */
registry_entries: Registry[];
}
/**
 * Interface representing the parsed `Registry` structure
*/
export interface Registry {
/**
* Full path to `Registry` key and name.
* Ex: ` ROOT\...\CurrentVersion\Run`
*/
path: string;
/**
* Path to Key
* Ex: ` ROOT\...\CurrentVersion`
*/
key: string;
/**
* Key name
* Ex: `Run`
*/
name: string;
/**
* Values associated with key name
* Ex: `Run => Vmware`. Where Run is the `key` name and `Vmware` is the value name
*/
values: Value[];
/**Timestamp of when the path was last modified */
last_modified: number;
/**Depth of key name */
depth: number;
}
/**
* The value data associated with Registry key
* References:
* https://github.com/libyal/libregf
* https://github.com/msuhanov/regf/blob/master/Windows%20registry%20file%20format%20specification.md
*/
export interface Value {
/**Name of Value */
value: string;
/**
* Data associated with value. All types are strings by default. The real type can be determined by `data_type`.
* `REG_BINARY` is a base64 encoded string
*/
data: string;
/**
* Value type.
* Full list of types at: https://learn.microsoft.com/en-us/windows/win32/sysinfo/registry-value-types
*/
data_type: string;
}
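As a sketch of how this output could be consumed (the sample data below is illustrative, not real collection output, and the interfaces are trimmed to only the fields used), the following filters parsed entries for `Run` persistence keys. Matching is done in lowercase, mirroring how artemis handles Registry comparisons:

```typescript
// Minimal sketch: filter parsed Registry output for Run keys.
// Interfaces are reduced to the fields needed for this example.
interface Value {
  value: string;
  data: string;
  data_type: string;
}

interface RegistryEntry {
  path: string;
  key: string;
  name: string;
  values: Value[];
  last_modified: number;
  depth: number;
}

// Illustrative data, not real artemis output.
const entries: RegistryEntry[] = [
  {
    path: "ROOT\\Software\\Microsoft\\Windows\\CurrentVersion\\Run",
    key: "ROOT\\Software\\Microsoft\\Windows\\CurrentVersion",
    name: "Run",
    values: [{ value: "Vmware", data: "C:\\vmware.exe", data_type: "REG_SZ" }],
    last_modified: 1682282524,
    depth: 5,
  },
];

// Lowercase before matching, mirroring artemis comparison behavior.
function findKeys(data: RegistryEntry[], pattern: RegExp): RegistryEntry[] {
  return data.filter((e) => pattern.test(e.path.toLowerCase()));
}

const runKeys = findKeys(entries, /currentversion\\run$/);
```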
Scheduled Tasks
Windows Scheduled Tasks
are a common form of persistence on Windows systems.
There are two (2) types of Scheduled Task
files:
- XML based files
- Job based files
artemis
supports both formats. Starting with Windows Vista, XML files are used for
Scheduled Tasks
.
Other Parsers:
- Any XML reader
- Velociraptor
(Only supports XML
Scheduled Tasks
)
References:
TOML Collection
system = "windows"
[output]
name = "tasks_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "tasks"
[artifacts.tasks]
# Optional
# alt_drive = 'C'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing Scheduled Tasks. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`).
Output Structure
Collection of TaskData
export interface TaskData {
/**Array of `TaskXml` parsed XML files */
tasks: TaskXml[];
/**Array of `TaskJob` parsed Job files */
jobs: TaskJob[];
}
/**
* JSON representation of the Task XML schema.
* Most of the schema is Optional. Only `Actions` is required
*/
export interface TaskXml {
/**Registration Info about the Task */
registrationInfo?: RegistrationInfo;
/**Triggers that start the Task */
triggers?: Triggers;
/**Settings for the Task */
settings?: Settings;
/**Base64 encoded raw binary data associated with the Task */
data?: string;
/**Principal user information related to the Task */
principals?: Principals;
/**Actions executed by the Task */
actions: Actions;
/**Path to the XML file */
path: string;
}
/**
* Parsed information about the Job file
*/
export interface TaskJob {
/**ID associated with the Task */
job_id: string;
/**Error retry count for the Task */
error_retry_count: number;
/**Error retry interval for the Task */
error_retry_interval: number;
/**Idle deadline for Task */
idle_deadline: number;
/**Idle wait for Task */
idle_wait: number;
/**Task Priority */
priority: string;
/**Max run time for Task */
max_run_time: number;
/**Task Exit code */
exit_code: number;
/**Task Status */
status: string;
/**Flags associated with Task */
flags: string[];
/**Last run time for Task in LOCALTIME */
system_time: string;
/**Running count for Task */
running_instance_count: number;
/**Application name associated with Task */
application_name: string;
/**Parameters for application */
parameters: string;
/**Working directory associated with Task */
working_directory: string;
/**Creator of Task */
author: string;
/**Comments associated with Task */
comments: string;
/**Base64 encoded User data associated with Task */
user_data: string;
/**Start Error associated with Task */
start_error: number;
/**Triggers that start the Task */
triggers: JobTriggers[];
/**Path to Job file */
path: string;
}
/**
* Triggers associated with Job file
*/
interface JobTriggers {
/**Task start date */
start_date: string;
/**Task end date */
end_date: string;
/**Task start time */
start_time: string;
/**Task duration */
duration: number;
/**Task interval */
interval_mins: number;
/**Array of trigger flags */
flags: string[];
/**Array of trigger types */
types: string[];
}
/**
* Registration Info related to Task XML
*/
interface RegistrationInfo {
/**URI associated with the Task */
uri?: string;
/**SID associated with Task */
sid?: string;
/**Source of Task */
source?: string;
/**Creation OR Modification of Task */
date?: string;
/**Creator of Task */
author?: string;
/**Version level of Task */
version?: string;
/**User-friendly description of Task */
description?: string;
/**URI of external documentation for Task */
documentation?: string;
}
/**
* Triggers that active the Task
*/
interface Triggers {
/**Boot triggers for Task */
boot: BootTrigger[];
/**Registration triggers for Task. Format is exactly the same as BootTrigger*/
registration: BootTrigger[];
/**Idle triggers for Task */
idle: IdleTrigger[];
/**Time triggers for Task */
time: TimeTrigger[];
/**Event triggers for Task */
event: EventTrigger[];
/**Logon triggers for Task */
logon: LogonTrigger[];
/**Session triggers for Task */
session: SessionTrigger[];
/**Calendar triggers for Task */
calendar: CalendarTrigger[];
/**Windows Notification triggers for Task */
wnf: WnfTrigger[];
}
/**
* Most Triggers have a collection of common options
*/
interface BaseTriggers {
/**ID for trigger */
id?: string;
/**Start date for Task */
start_boundary?: string;
/**End date for Task */
end_boundary?: string;
/**Bool value to activate Trigger */
enabled?: boolean;
/**Time limit for Task */
execution_time_limit?: string;
/**Repetition for Task */
repetition?: Repetition;
}
/**
* Repetition Options for Triggers
*/
interface Repetition {
/**Trigger restart intervals */
interval: string;
/**Repetition can stop after duration has elapsed */
duration?: string;
/**Task can stop at end of duration */
stop_at_duration_end?: boolean;
}
/**
* Boot options to Trigger Task
*/
interface BootTrigger {
/**Base Triggers associated with Boot */
common?: BaseTriggers;
/**Task delayed after boot */
delay?: string;
}
/**
* Idle options to Trigger Task
*/
interface IdleTrigger {
/**Base Triggers associated with Idle */
common?: BaseTriggers;
}
/**
* Time options to Trigger Task
*/
interface TimeTrigger {
/**Base Triggers associated with Time */
common?: BaseTriggers;
/**Delay time for `start_boundary` */
random_delay?: string;
}
/**
* Event options to Trigger Task
*/
interface EventTrigger {
/**Base Triggers associated with Event */
common?: BaseTriggers;
/**Array of subscriptions that can Trigger the Task */
subscription: string[];
/**Delay to Trigger the Task */
delay?: string;
/**Trigger can start Task after `number_of_occurrences` */
number_of_occurrences?: number;
/**Trigger can start Task after `period_of_occurrence` */
period_of_occurrence?: string;
/**Specifies XML field name */
matching_element?: string;
/**Specifies set of XML elements */
value_queries?: string[];
}
/**
* Logon options to Trigger Task
*/
interface LogonTrigger {
/**Base Triggers associated with Logon */
common?: BaseTriggers;
/**Account name associated with Logon Trigger */
user_id?: string;
/**Delay Logon Task Trigger */
delay?: string;
}
/**
* Session options to Trigger Task
*/
interface SessionTrigger {
/**Base Triggers associated with Session */
common?: BaseTriggers;
/**Account name associated with Session Trigger */
user_id?: string;
/**Delay Session Task Trigger */
delay?: string;
/**Session change that Triggers Task */
state_change?: string;
}
/**
* Windows Notification options to Trigger Task
*/
interface WnfTrigger {
/**Base Triggers associated with Windows Notification */
common?: BaseTriggers;
/**Notification State name */
state_name: string;
/**Delay Notification Trigger Task */
delay?: string;
/**Data associated with Notification Trigger */
data?: string;
/**Offset associated with Notification Trigger */
data_offset?: string;
}
/**
* Calendar Options to Trigger Task
*/
interface CalendarTrigger {
/**Base Triggers associated with Calendar */
common?: BaseTriggers;
/**Delay Calendar Trigger Task */
random_delay?: string;
/**Run Task on every X number of days */
schedule_by_day?: ByDay;
/**Run Task on every X number of weeks */
schedule_by_week?: ByWeek;
/**Run Task on specific days of month */
schedule_by_month?: ByMonth;
/**Run Task on specific weeks on specific days */
schedule_by_month_day_of_week?: ByMonthDayWeek;
}
/**
* How often to run Task by days
*/
interface ByDay {
/**Run Task on X number of days. Ex: Two (2) means every other day */
days_interval?: number;
}
/**
* How often to run Task by Weeks
*/
interface ByWeek {
/**Run Task on X number of weeks. Ex: Two (2) means every other week */
weeks_interval?: number;
/**Runs on specified days of the week. Ex: Monday, Tuesday */
days_of_week?: string[];
}
/**
* How often to run Task by Months
*/
interface ByMonth {
/**Days of month to run Task */
days_of_month?: string[];
/**Months to run Task. Ex: July, August */
months?: string[];
}
/**How often to run Tasks by Months and Weeks */
interface ByMonthDayWeek {
/**Weeks of month to run Task */
weeks?: string[];
/**Days of month to run Task */
days_of_week?: string[];
/**Months to run Task */
months?: string[];
}
/**
* Settings determine how to run Task Actions
*/
interface Settings {
/**Start Task on demand */
allow_start_on_demand?: boolean;
/**Restart if fails */
restart_on_failure?: RestartType;
/**Determines how Windows handles multiple Task executions */
multiple_instances_policy?: string;
/**Disable Task on battery power */
disallow_start_if_on_batteries?: boolean;
/**Stop Task if going on battery power */
stop_if_going_on_batteries?: boolean;
/**Task can be terminated if time limits exceeded */
allow_hard_terminate?: boolean;
/**If scheduled time is missed, Task may be started */
start_when_available?: boolean;
/**Run based on network profile name */
newtork_profile_name?: string;
/**Run only if network connection available */
run_only_if_network_available?: boolean;
/**Wake system from standby or hibernate to run */
wake_to_run?: boolean;
/**Task is enabled */
enabled?: boolean;
/**Task is hidden from console or GUI */
hidden?: boolean;
/**Delete Task after specified duration and no future run times */
delete_expired_tasks_after?: string;
/**Options to run when Idle */
idle_settings?: IdleSettings;
/**Network settings to run */
network_settings?: NetworkSettings;
/**Task execution time limit */
execution_time_limit?: string;
/**Task Priority. Lowest is 1. Highest is 10 */
priority?: number;
/**Only run if system is Idle */
run_only_if_idle?: boolean;
/**Use unified scheduling engine to handle Task execution */
use_unified_scheduling_engine?: boolean;
/**Task is disabled on Remote App Sessions */
disallow_start_on_remote_app_session?: boolean;
/**Options to run Task during system maintenance periods */
maintence_settings?: MaintenceSettings;
/**Task disabled on next OS startup */
volatile?: boolean;
}
/**
* Restart on failure options
*/
interface RestartType {
/**Duration between restarts */
interval: string;
/**Number of restart attempts */
count: number;
}
/**
* Idle options
*/
interface IdleSettings {
/**Task may be delayed up until specified duration */
duration?: string;
/**Task will wait for system to become idle */
wait_timeout?: string;
/**Task stops if system is no longer Idle */
stop_on_idle_end?: boolean;
/**Task restarts when system returns to Idle */
restart_on_idle?: boolean;
}
/**
* Network options
*/
interface NetworkSettings {
/**Task runs only on specified network name */
name?: string;
/**GUID associated with `NetworkSettings` */
id?: string;
}
/**
 * Maintenance options
 */
interface MaintenceSettings {
/**Duration of maintenance */
period: string;
/**Deadline for Task to run */
deadline?: string;
/**Task can run independently of other Tasks with `MaintenceSettings` */
exclusive?: boolean;
}
/**
* SID data associated with Task
*/
interface Principals {
/**Principal name for running the Task */
user_id?: string;
/**Determines if Task run on logon */
logon_type?: string;
/**Group ID associated with Task. Task can be triggered by anyone in Group ID */
group_id?: string;
/**Friendly name of the principal */
display_name?: string;
/**Privilege level of Task */
run_level?: string;
/**Process Token SID associated with Task */
process_token_sid_type?: string;
/**Array of privilege values */
required_privileges?: string[];
/**Unique user selected ID */
id_attribute?: string;
}
/**
* Actions run by the Task
*/
interface Actions {
/**Executes one or more commands */
exec: ExecType[];
/**COM handler to execute */
com_handler: ComHandlerType[];
/**Send an email */
send_email: SendEmail[];
/**Display a message */
show_message: Message[];
}
/**
* Command options
*/
interface ExecType {
/**Command to execute */
command: string;
/**Arguments for command */
arguments?: string;
/**Path to a directory */
working_directory?: string;
}
/**
* COM options
*/
interface ComHandlerType {
/**COM GUID */
class_id: string;
/**XML data for COM */
data?: string;
}
/**
* SendEmail options
*/
interface SendEmail {
/**Email server domain */
server?: string;
/**Subject of email */
subject?: string;
/**Who should receive the email */
to?: string;
/**Who should be CC'd */
cc?: string;
/**Who should be BCC'd */
bcc?: string;
/**Reply to email address */
reply_to?: string;
/**The sender email address */
from: string;
/**Custom header fields to include in email */
header_fields?: Record<string, string>;
/**Email message body */
body?: string;
/**List of files to be attached */
attachment?: string[];
}
/**
* Message options
*/
interface Message {
/**Title of message */
title?: string;
/**Message body */
body: string;
}
Search
Windows Search
is an indexing service for tracking files and content on
Windows.
Search
can parse a large amount of metadata (properties) for each entry it
indexes. It has almost 600 different types of properties it can parse. It can
even index part of the contents of a file.
Search
can index large parts of the file system, so parsing the Search
database can provide a partial file listing of the system. Search
is disabled
on Windows Servers, and starting with newer versions of Windows 11 it may be
stored in three (3) SQLite databases (previously a single ESE database).
The Search
database can get extremely large (4GB+ sizes have been seen). The
larger the ESE database, the more resources artemis
needs to parse the data.
Similar to the filelisting artifact, artemis
will output the data every 100k
entries and then continue.
Other parsers:
References:
TOML Collection
system = "windows"
[output]
name = "search_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "search"
[artifacts.search]
# Optional
# alt_path = "C:\ProgramData\Microsoft\Search\Data\Applications\Windows\Windows.edb"
Collection Options
- `alt_path`: An alternative path to the Search ESE or SQLite database. This configuration is optional. By default `artemis` will use `%systemdrive%\ProgramData\Microsoft\Search\Data\Applications\Windows\Windows.edb`.
Output Structure
An array of SearchEntry
entries
export interface SearchEntry {
/**Index ID for row */
document_id: number;
/**Search entry name */
entry: string;
/**Search entry last modified in UNIXEPOCH seconds */
last_modified: number;
/**
* JSON object representing the properties associated with the entry
*
* Example:
* ```
* "properties": {
"3-System_ItemFolderNameDisplay": "Programs",
"4429-System_IsAttachment": "0",
"4624-System_Search_AccessCount": "0",
"4702-System_VolumeId": "08542f90-0000-0000-0000-501f00000000",
"17F-System_DateAccessed": "k8DVxD162QE=",
"4392-System_FileExtension": ".lnk",
"4631F-System_Search_GatherTime": "7B6taj962QE=",
"5-System_ItemTypeText": "Shortcut",
"4184-System_ComputerName": "DESKTOP-EIS938N",
"15F-System_DateModified": "EVHzDyR22QE=",
"4434-System_IsFolder": "0",
"4365-System_DateImported": "ABKRqWyI1QE=",
"4637-System_Search_Store": "file",
"4373-System_Document_DateSaved": "EVHzDyR22QE=",
"4448-System_ItemPathDisplayNarrow": "Firefox (C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs)",
"4559-System_NotUserContent": "0",
"33-System_ItemUrl": "file:C:/ProgramData/Microsoft/Windows/Start Menu/Programs/Firefox.lnk",
"4447-System_ItemPathDisplay": "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Firefox.lnk",
"13F-System_Size": "7QMAAAAAAAA=",
"4441-System_ItemFolderPathDisplayNarrow": "Programs (C:\\ProgramData\\Microsoft\\Windows\\Start Menu)",
"0-InvertedOnlyPids": "cBFzESgSZRI=",
"4443-System_ItemNameDisplay": "Firefox.lnk",
"4442-System_ItemName": "Firefox.lnk",
"14F-System_FileAttributes": "32",
"4403-System_FolderNameDisplay": "Cygwin",
"4565-System_ParsingName": "Firefox.lnk",
"4456-System_Kind": "bGluawBwcm9ncmFt",
"27F-System_Search_Rank": "707406378",
"16F-System_DateCreated": "UUZNqWyI1QE=",
"4440-System_ItemFolderPathDisplay": "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs",
"4397-System_FilePlaceholderStatus": "6",
"4465-System_Link_TargetParsingPath": "C:\\Program Files\\Mozilla Firefox\\firefox.exe",
"4431-System_IsEncrypted": "0",
"4457-System_KindText": "Link; Program",
"4444-System_ItemNameDisplayWithoutExtension": "Firefox",
"11-System_FileName": "Firefox.lnk",
"4623-System_SFGAOFlags": "1078002039",
"0F-InvertedOnlyMD5": "z1gPcor92OaNVyAAzRdOsw==",
"4371-System_Document_DateCreated": "ABKRqWyI1QE=",
"4633-System_Search_LastIndexedTotalTime": "0.03125",
"4396-System_FileOwner": "Administrators",
"4438-System_ItemDate": "ABKRqWyI1QE=",
"4466-System_Link_TargetSFGAOFlags": "1077936503",
"4450-System_ItemType": ".lnk",
"4678-System_ThumbnailCacheId": "DzpSS6gn5yg="
}
* ```
*/
properties: Record<string, string>;
}
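Many of the timestamp-like property values in the example above (such as `15F-System_DateModified`) appear to be base64-encoded 64-bit FILETIMEs stored little-endian. Under that assumption, a sketch for decoding one into a date (the helper name is ours, not part of artemis):

```typescript
// Sketch: decode a base64-encoded little-endian FILETIME property value.
// Assumption: timestamp properties (e.g. `15F-System_DateModified`) are
// 64-bit FILETIMEs: 100-nanosecond intervals since 1601-01-01.
function filetimeToDate(b64: string): Date {
  const buf = Buffer.from(b64, "base64");
  // Read the 64-bit value as two 32-bit halves to stay within Number range.
  const lo = buf.readUInt32LE(0);
  const hi = buf.readUInt32LE(4);
  const hundredNs = hi * 4294967296 + lo;
  // 11644473600000 ms between 1601-01-01 and the UNIX epoch.
  const unixMs = hundredNs / 10000 - 11644473600000;
  return new Date(unixMs);
}

// Value taken from the `15F-System_DateModified` property above.
const modified = filetimeToDate("EVHzDyR22QE=");
```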
Services
Windows Services
are a common form of persistence and privilege escalation on
Windows systems. Service data is stored in the SYSTEM Registry file.
Services
run with SYSTEM level privileges.
Other Parsers:
- Any tool that can read the Registry
- Velociraptor
References:
TOML Collection
system = "windows"
[output]
name = "services_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "services"
[artifacts.services]
alt_drive = 'C'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing Services. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`).
Output Structure
An array of Services
entries
export interface Services {
/**Current State of the Service */
state: string;
/**Name of Service */
name: string;
/**Display name of Service */
display_name: string;
/**Service description */
description: string;
/**Start mode of Service */
start_mode: string;
/**Path to executable for Service */
path: string;
/**Service types. Ex: KernelDriver */
service_type: string[];
/**Account associated with Service */
account: string;
/**Registry modified timestamp in UNIXEPOCH seconds. May be used to determine when the Service was created */
modified: number;
/**DLL associated with Service */
service_dll: string;
/**Service command upon failure */
failure_command: string;
/**Reset period associated with Service */
reset_period: number;
/**Service actions upon failure */
failure_actions: FailureActions[];
/**Privileges associated with Service */
required_privileges: string[];
/**Error associated with Service */
error_control: string;
/**Registry path associated with Service */
reg_path: string;
}
/**
* Failure actions executed when Service fails
*/
interface FailureActions {
/**Action executed upon failure */
action: string;
/**Delay in seconds on failure */
delay: number;
}
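One common triage pass over this output is flagging services whose executable lives outside the usual system directories. A minimal sketch (fields trimmed, data illustrative; real service `path` values may include quoting and arguments that need stripping first):

```typescript
// Sketch: flag services whose executable path is outside common system paths.
// Field names follow the Services interface above; data is illustrative.
interface ServiceEntry {
  name: string;
  path: string;
  start_mode: string;
}

// Assumed "expected" locations for this example; tune for your environment.
const expected = [/^c:\\windows\\system32\\/, /^c:\\windows\\syswow64\\/];

function unusualServices(services: ServiceEntry[]): ServiceEntry[] {
  return services.filter(
    (s) => !expected.some((re) => re.test(s.path.toLowerCase())),
  );
}

const sample: ServiceEntry[] = [
  { name: "WinDefend", path: "C:\\Windows\\System32\\drivers\\wd.sys", start_mode: "Auto" },
  { name: "Updater", path: "C:\\Users\\Public\\update.exe", start_mode: "Auto" },
];

const flagged = unusualServices(sample);
```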
Shellbags
Windows Shellbags
are Registry
entries that track what directories a user
has browsed via Explorer GUI. These entries are stored in the undocumented
ShellItem
binary format.
artemis
supports parsing the most common types of shellitems
, but if you
encounter a shellitem
entry that is not supported please open an issue!
Other parsers:
References:
TOML Collection
system = "windows"
[output]
name = "shellbags_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "shellbags"
[artifacts.shellbags]
resolve_guids = true
# Optional
# alt_drive = 'C'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing Shellbags. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`).
- `resolve_guids`: Boolean value controlling whether to try to resolve GUIDs found when parsing Shellbags.
  - If false: `"resolve_path": "20d04fe0-3aea-1069-a2d8-08002b30309d\C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current"`
  - If true: `"resolve_path": "This PC\C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current"`
Output Structure
An array of Shellbag
entries
export interface Shellbags {
/**Reconstructed directory path */
path: string;
/**FAT created timestamp. Only applicable for Directory `shell_type` */
created: number;
/**FAT modified timestamp. Only applicable for Directory `shell_type` */
modified: number;
/**FAT accessed timestamp. Only applicable for Directory `shell_type` */
accessed: number;
/**Entry number in MFT. Only applicable for Directory `shell_type` */
mft_entry: number;
/**Sequence number in MFT. Only applicable for Directory `shell_type` */
mft_sequence: number;
/**
* Type of shellitem
*
* Can be:
* `Directory, URI, RootFolder, Network, Volume, ControlPanel, UserPropertyView, Delegate, Variable, MTP, Unknown, History`
*
* Most common is typically `Directory`
*/
shell_type: string;
/**
* Reconstructed directory with any GUIDs resolved
* Ex: `20d04fe0-3aea-1069-a2d8-08002b30309d` to `This PC`
*/
resolve_path: string;
/**User Registry file associated with `Shellbags` */
reg_file: string;
/**Registry key path to `Shellbags` data */
reg_path: string;
/**Full file path to the User Registry file */
reg_file_path: string;
}
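Since each entry records which user hive it came from, a simple post-processing step is grouping browsed paths per hive to see each user's Explorer activity. A sketch (fields trimmed, data illustrative):

```typescript
// Sketch: group reconstructed Shellbag paths by the user hive they came from.
// Only the fields needed for the example are modeled; data is illustrative.
interface ShellbagEntry {
  path: string;
  reg_file: string;
}

function pathsByHive(bags: ShellbagEntry[]): Map<string, string[]> {
  const grouped = new Map<string, string[]>();
  for (const bag of bags) {
    const list = grouped.get(bag.reg_file) || [];
    list.push(bag.path);
    grouped.set(bag.reg_file, list);
  }
  return grouped;
}

const bags: ShellbagEntry[] = [
  { path: "C:\\Program Files", reg_file: "UsrClass.dat" },
  { path: "C:\\Temp", reg_file: "UsrClass.dat" },
];

const grouped = pathsByHive(bags);
```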
Shimcache
Windows Shimcache
(also called: AppCompatCache
,
Application Compatibility Cache
, or AppCompat
) are Registry
entries that
may* indicate application execution. These entries are only written
when the system is shut down or restarted.
* While an entry in Shimcache
often implies the application was
executed, Windows may pre-populate Shimcache
with entries based on a user
browsing to a directory that contains an application.
Other parsers:
References:
TOML Collection
system = "windows"
[output]
name = "shimcache_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
# Optional
# alt_drive = 'D'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing Shimcache. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`).
Output Structure
An array of Shimcache
entries
export interface Shimcache {
/**Entry number for shimcache. Entry zero (0) is most recent execution */
entry: number;
/**Full path to application file */
path: string;
/**Standard Information Modified timestamp in UNIXEPOCH seconds */
last_modified: number;
/**Full path to the Registry key */
key_path: string;
}
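Because entry zero (0) is the most recent, sorting by the entry number gives a rough recency ordering. A sketch (data illustrative):

```typescript
// Sketch: order Shimcache entries most-recent first using the entry number
// (entry 0 is the most recent, per the structure above).
interface ShimcacheEntry {
  entry: number;
  path: string;
  last_modified: number;
}

function mostRecent(entries: ShimcacheEntry[], count: number): ShimcacheEntry[] {
  // Copy before sorting so the original array is left untouched.
  return [...entries].sort((a, b) => a.entry - b.entry).slice(0, count);
}

const cache: ShimcacheEntry[] = [
  { entry: 2, path: "C:\\old.exe", last_modified: 0 },
  { entry: 0, path: "C:\\new.exe", last_modified: 0 },
  { entry: 1, path: "C:\\mid.exe", last_modified: 0 },
];

const top = mostRecent(cache, 2);
```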
ShimDB
Windows Shim Database (ShimDB
) files can be used by Windows applications to provide
compatibility between Windows versions.
It does this via shims
that are inserted into the application to modify
function calls. Malicious custom shims can be created as a form of persistence.
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "sdb_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "shimdb"
[artifacts.shimdb]
# Optional
# alt_drive = 'D'
Collection Options
- `alt_drive`: Expects a single character value. Will use an alternative drive letter when parsing ShimDB. This configuration is optional. By default `artemis` will use the `%systemdrive%` value (typically `C`).
Output Structure
An array of ShimDB
entries
export interface Shimdb {
/**Array of `TAGS` associated with the index tag*/
indexes: TagData[];
/**Data associated with the Shimdb */
db_data: DatabaseData;
/**Path to parsed sdb file */
sdb_path: string;
}
/**
* SDB files are composed of `TAGS`. There are multiple types of `TAGS`
* `data` have `TAGS` that can be represented via a JSON object
 * `list_data` have `TAGS` that can be represented as an array of JSON objects
*
* Example:
* ```
* "data": {
* "TAG_FIX_ID": "4aeea7ee-44f1-4085-abc2-6070eb2b6618",
* "TAG_RUNTIME_PLATFORM": "37",
* "TAG_NAME": "256Color"
* },
* "list_data": [
* {
* "TAG_NAME": "Force8BitColor",
* "TAG_SHIM_TAGID": "169608"
* },
* {
* "TAG_SHIM_TAGID": "163700",
* "TAG_NAME": "DisableThemes"
* }
* ]
* ```
*
* See https://www.geoffchappell.com/studies/windows/win32/apphelp/sdb/index.htm for complete list of `TAGS`
*/
export interface TagData {
/**TAGs represented as a JSON object */
data: Record<string, string>;
/**Array of TAGS represented as JSON objects */
list_data: Record<string, string>[];
}
/**
* Metadata related to the SDB file
*/
export interface DatabaseData {
/**SDB version info */
sdb_version: string;
/**Compile timestamp of the SDB file in UNIXEPOCH seconds */
compile_time: number;
/**Compiler version info */
compiler_version: string;
/**Name of SDB */
name: string;
/**Platform ID */
platform: number;
/**ID associated with SDB */
database_id: string;
/**
* The SDB file may contain additional metadata information
* May include additional `TAGS`
*/
additional_metdata: Record<string, string>;
/**Array of `TAGS` associated with the SDB file */
list_data: TagData[];
}
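Since `TAGS` show up both as a single JSON object (`data`) and as an array of objects (`list_data`), walking both is the usual way to pull a given tag. A sketch that collects every `TAG_NAME` (data taken from the example above):

```typescript
// Sketch: collect every TAG_NAME found in parsed SDB tag data, checking
// both the `data` object and the `list_data` array forms.
interface TagData {
  data: Record<string, string>;
  list_data: Record<string, string>[];
}

function tagNames(tags: TagData[]): string[] {
  const names: string[] = [];
  for (const tag of tags) {
    if (tag.data["TAG_NAME"]) {
      names.push(tag.data["TAG_NAME"]);
    }
    for (const item of tag.list_data) {
      if (item["TAG_NAME"]) {
        names.push(item["TAG_NAME"]);
      }
    }
  }
  return names;
}

// Illustrative data modeled on the TAGS example above.
const tags: TagData[] = [
  {
    data: { TAG_NAME: "256Color" },
    list_data: [{ TAG_NAME: "Force8BitColor" }, { TAG_SHIM_TAGID: "163700" }],
  },
];

const names = tagNames(tags);
```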
Shortcuts
Windows Shortcut
files (.lnk
) are files that point to another file. They
often contain a large amount of metadata related to the target file. Shortcut
files can be used to distribute malware and can also provide evidence of file
interaction. The directory at
C:\Users\%\AppData\Roaming\Microsoft\Windows\Recent
contains multiple
Shortcuts
that point to files recently opened by the user.
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "shortcuts_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "shortcuts"
[artifacts.shortcuts]
path = "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup"
Collection Options
- `path`: Target path where `artemis` should parse Shortcut files. This configuration is required.
Output Structure
A Shortcut
object structure
export interface Shortcut {
/**Path to `shortcut (lnk)` file */
source_path: string;
/**Flags that specify what data structures are in the `lnk` file */
data_flags: string[];
/**File attributes of target file */
attribute_flags: string[];
/**Standard Information created timestamp of target file */
created: number;
/**Standard Information accessed timestamp of target file */
accessed: number;
/**Standard Information modified timestamp of target file */
modified: number;
/**Size in bytes of target file */
file_size: number;
/**Flag describing where the target file is located: on a volume or network share */
location_flags: string;
/**Path to target file */
path: string;
/**Serial associated with volume if target file is on drive */
drive_serial: string;
/**Drive type associated with volume if target file is on drive */
drive_type: string;
/**Name of volume if target file is on drive */
volume_label: string;
/**Network type if target file is on network share */
network_provider: string;
/**Network share name if target file is on network share */
network_share_name: string;
/**Network share device name if target file is on network share */
network_device_name: string;
/**Description of shortcut (lnk) file */
description: string;
/**Relative path to target file */
relative_path: string;
/**Directory of target file */
working_directory: string;
/**Command args associated with target file */
command_line_args: string;
/**Icon path associated with shortcut (lnk) file */
icon_location: string;
/**Hostname of target file */
hostname: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
droid_volume_id: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
droid_file_id: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
birth_droid_volume_id: string;
/**
* Digital Record Object Identification (DROID) used to track lnk file
*/
birth_droid_file_id: string;
/**Shellitems associated with shortcut (lnk) file */
shellitems: ShellItems[];
/**Array of property stores */
properties: Record<string, string | number | boolean | null>[];
/**Environmental variable data in shortcut */
environment_variable: string;
/**Console metadata in shortcut */
console: Console[];
/**Windows Codepage in shortcut */
codepage: number;
/**Special folder ID in shortcut */
special_folder_id: number;
/**macOS Darwin ID in shortcut */
darwin_id: string;
/**Shim layer entry in shortcut */
shim_layer: string;
/**Known folder GUID in shortcut */
known_folder: string;
}
/**
 * Console metadata embedded in Shortcut file
*/
interface Console {
/**Colors for Console */
color_flags: string[];
/**Additional colors for Console */
pop_fill_attributes: string[];
/**Console width buffer size */
screen_width_buffer_size: number;
/**Console height buffer size */
screen_height_buffer_size: number;
/**Console window width */
window_width: number;
/**Console window height */
window_height: number;
/**Console X coordinate */
window_x_coordinate: number;
/**Console Y coordinate */
window_y_coordinate: number;
/**Console font size */
font_size: number;
/**Console font family */
font_family: string;
/**Console font weight */
font_weight: string;
/**Console font name */
face_name: string;
/**Console cursor size */
cursor_size: string;
/**Is full screen set (boolean) */
full_screen: number;
/**Insert mode */
insert_mode: number;
/**Automatic position set (boolean) */
automatic_position: number;
/**Console history buffer size */
history_buffer_size: number;
/**Console number of buffers */
number_history_buffers: number;
/**Duplicates allowed in history */
duplicates_allowed_history: number;
/**Base64 encoded color table. */
color_table: string;
}
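Since Shortcut files that pass command-line arguments to their target are a common malware delivery trait, a quick triage pass over this output is filtering on `command_line_args`. A sketch (fields trimmed, data illustrative):

```typescript
// Sketch: flag Shortcut files that pass command-line arguments to their
// target. Field names mirror the Shortcut interface above; data is
// illustrative, not real artemis output.
interface ShortcutEntry {
  source_path: string;
  path: string;
  command_line_args: string;
}

function shortcutsWithArgs(items: ShortcutEntry[]): ShortcutEntry[] {
  return items.filter((s) => s.command_line_args.trim().length > 0);
}

const items: ShortcutEntry[] = [
  {
    source_path: "C:\\invoice.lnk",
    path: "C:\\Windows\\System32\\cmd.exe",
    command_line_args: "/c payload.bat",
  },
  {
    source_path: "C:\\Firefox.lnk",
    path: "C:\\Program Files\\Mozilla Firefox\\firefox.exe",
    command_line_args: "",
  },
];

const suspicious = shortcutsWithArgs(items);
```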
SRUM
Windows System Resource Utilization Monitor (SRUM
) is a service that tracks
application resource usage. The service tracks application data such as time
running, bytes sent, bytes received, energy usage, and lots more.
This service was introduced in Windows 8 and is stored in an ESE database at
C:\Windows\System32\sru\SRUDB.dat
. On Windows 8 some of the data can be found
in the Registry too (temporary storage before writing to SRUDB.dat), but in
later versions of Windows the data is no longer in the Registry.
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "srum_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "srum"
[artifacts.srum]
# Optional
# alt_path = "C:\Windows\System32\sru\SRUDB.dat"
Collection Options
- `alt_path`: An alternative path to the SRUM ESE database. This configuration is optional. By default `artemis` will use `%systemdrive%\Windows\System32\sru\SRUDB.dat`.
Output Structure
An array of entries based on each SRUM
table
/**
* SRUM table associated with application executions `{D10CA2FE-6FCF-4F6D-848E-B2E99266FA89}`
*/
export interface ApplicationInfo {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Foreground Cycle time for application */
foreground_cycle_time: number;
/**Background Cycle time for application */
background_cycle_time: number;
/**Facetime for application */
facetime: number;
/**Count of foreground context switches */
foreground_context_switches: number;
/**Count of background context switches */
background_context_switches: number;
/**Count of foreground bytes read */
foreground_bytes_read: number;
/**Count of foreground bytes written */
foreground_bytes_written: number;
/**Count of foreground read operations */
foreground_num_read_operations: number;
/**Count of foreground write operations */
foreground_num_write_options: number;
/**Count of foreground flushes */
foreground_number_of_flushes: number;
/**Count of background bytes read */
background_bytes_read: number;
/**Count of background bytes written */
background_bytes_written: number;
/**Count of background read operations */
background_num_read_operations: number;
/**Count of background write operations */
background_num_write_operations: number;
/**Count of background flushes */
background_number_of_flushes: number;
}
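The `timestamp` fields in these tables are UNIXEPOCH seconds. A minimal sketch for converting them to ISO 8601 strings when reviewing output (the helper name is ours, not part of artemis):

```typescript
// Convert a UNIXEPOCH seconds value (as found in SRUM output) to an
// ISO 8601 string. JavaScript's Date expects milliseconds, hence * 1000.
function unixEpochToIso(seconds: number): string {
  return new Date(seconds * 1000).toISOString();
}

// Example: 1672531200 is 1 Jan 2023 00:00:00 UTC
const iso = unixEpochToIso(1672531200);
// iso === "2023-01-01T00:00:00.000Z"
```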
/**
* SRUM table associated with the timeline of an application's execution `{D10CA2FE-6FCF-4F6D-848E-B2E99266FA86}`
*/
export interface ApplicationTimeline {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Flags associated with entry */
flags: number;
/**End time of entry */
end_time: number;
/**Duration of timeline in microseconds */
duration_ms: number;
/**Span of timeline in microseconds */
span_ms: number;
/**Timeline end for entry */
timeline_end: number;
/**In focus value for entry */
in_focus_timeline: number;
/**User input value for entry */
user_input_timeline: number;
/**Comp rendered value for entry */
comp_rendered_timeline: number;
/**Comp dirtied value for entry */
comp_dirtied_timeline: number;
/**Comp propagated value for entry */
comp_propagated_timeline: number;
/**Audio input value for entry */
audio_in_timeline: number;
/**Audio output value for entry */
audio_out_timeline: number;
/**CPU value for entry */
cpu_timeline: number;
/**Disk value for entry */
disk_timeline: number;
/**Network value for entry */
network_timeline: number;
/**MBB value for entry */
mbb_timeline: number;
/**In focus seconds count */
in_focus_s: number;
/**PSM foreground seconds count */
psm_foreground_s: number;
/**User input seconds count */
user_input_s: number;
/**Comp rendered seconds count */
comp_rendered_s: number;
/**Comp dirtied seconds count */
comp_dirtied_s: number;
/**Comp propagated seconds count */
comp_propagated_s: number;
/**Audio input seconds count */
audio_in_s: number;
/**Audio output seconds count */
audio_out_s: number;
/**Cycles value for entry */
cycles: number;
/**Cycles breakdown value for entry */
cycles_breakdown: number;
/**Cycles attribute value for entry */
cycles_attr: number;
/**Cycles attribute breakdown for entry */
cycles_attr_breakdown: number;
/**Cycles WOB value for entry */
cycles_wob: number;
/**Cycles WOB breakdown value for entry */
cycles_wob_breakdown: number;
/**Disk raw value for entry */
disk_raw: number;
/**Network tail raw value for entry */
network_tail_raw: number;
/**Network bytes associated with entry*/
network_bytes_raw: number;
/**MBB tail raw value for entry */
mbb_tail_raw: number;
/**MBB bytes associated with entry */
mbb_bytes_raw: number;
/**Display required seconds count */
display_required_s: number;
/**Display required timeline value for entry */
display_required_timeline: number;
/**Keyboard input timeline value for entry */
keyboard_input_timeline: number;
/**Keyboard input seconds count */
keyboard_input_s: number;
/**Mouse input seconds count */
mouse_input_s: number;
}
/**
* SRUM table associated with VFU `{7ACBBAA3-D029-4BE4-9A7A-0885927F1D8F}`. Unsure what this tracks.
*/
export interface AppVfu {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Flags associated with VFU entry */
flags: number;
/**Start time in UNIXEPOCH seconds associated with VFU entry */
start_time: number;
/**End time in UNIXEPOCH seconds associated with VFU entry */
end_time: number;
/**Base64 encoded usage data associated with VFU entry */
usage: string;
}
/**
 * SRUM table associated with EnergyInfo `{DA73FB89-2BEA-4DDC-86B8-6E048C6DA477}`
*/
export interface EnergyInfo {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Base64 encoded binary data associated with EnergyInfo entry */
binary_data: string;
}
/**
* SRUM table associated with EnergyUsage `{FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}` and `{FEE4E14F-02A9-4550-B5CE-5FA2DA202E37}LT`
*/
export interface EnergyUsage {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Event Timestamp in UNIXEPOCH seconds */
event_timestamp: number;
/**State transition associated with entry */
state_transition: number;
/**Full charged capacity associated with entry */
full_charged_capacity: number;
/**Designed capacity associated with entry */
designed_capacity: number;
/** Charge level associated with entry */
charge_level: number;
/**Cycle count associated with entry */
cycle_count: number;
/**Configuration hash associated with entry */
configuration_hash: number;
}
/**
* SRUM table associated with NetworkInfo `{973F5D5C-1D90-4944-BE8E-24B94231A174}`
*/
export interface NetworkInfo {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Interface luid associated with entry */
interface_luid: number;
/**L2 profile ID associated with entry */
l2_profile_id: number;
/**L2 profile flags associated with entry */
l2_profile_flags: number;
/**Bytes sent associated with entry */
bytes_sent: number;
/**Bytes received associated with entry */
bytes_recvd: number;
}
/**
* SRUM table associated with NetworkConnectivityInfo `{DD6636C4-8929-4683-974E-22C046A43763}`
*/
export interface NetworkConnectivityInfo {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Interface luid associated with entry */
interface_luid: number;
/**L2 profile ID associated with entry */
l2_profile_id: number;
/**Connected time associated with entry */
connected_time: number;
/**Connect start time associated with entry in UNIXEPOCH seconds */
connect_start_time: number;
/**L2 profile flags associated with entry */
l2_profile_flags: number;
}
/**
* SRUM table associated with NotificationInfo `{D10CA2FE-6FCF-4F6D-848E-B2E99266FA86}`
*/
export interface NotificationInfo {
/**ID for the row in the ESE table */
auto_inc_id: number;
/**Timestamp when ESE table was updated in UNIXEPOCH seconds */
timestamp: number;
/**Application name */
app_id: string;
/**SID associated with the application process */
user_id: string;
/**Notification type associated with entry */
notification_type: number;
/**Size of payload associated with entry */
payload_size: number;
/**Network type associated with entry */
network_type: number;
}
SystemInfo
Gets system metadata associated with the endpoint
Other Parsers:
- Any tool that calls the Windows API or queries system information
References:
- N/A
TOML Collection
system = "windows"
[output]
name = "systeminfo_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "systeminfo"
Collection Options
- N/A
Output Structure
A SystemInfo
object structure
export interface SystemInfo {
/**Boot time for endpoint */
boot_time: number;
/**Endpoint hostname */
hostname: string;
/**Endpoint OS version */
os_version: string;
/**Uptime of endpoint */
uptime: number;
/**Endpoint kernel version */
kernel_version: string;
/**Endpoint platform */
platform: string;
/**CPU information */
cpu: Cpus[];
/**Disks information */
disks: Disks[];
/**Memory information */
memory: Memory;
/**Performance information */
performance: LoadPerformance;
}
/**
* CPU information on endpoint
*/
export interface Cpus {
/**CPU frequency */
frequency: number;
/**CPU usage on endpoint */
cpu_usage: number;
/**Name of CPU */
name: string;
/**Vendor ID for CPU */
vendor_id: string;
/**CPU brand */
brand: string;
/**Core Count */
physical_core_count: number;
}
/**
* Disk information on endpoint
*/
export interface Disks {
/**Type of disk */
disk_type: string;
/**Filesystem for disk */
file_system: string;
/**Disk mount point */
mount_point: string;
/**Disk storage */
total_space: number;
/**Storage remaining */
available_space: number;
/**If disk is removable */
removable: boolean;
}
/**
* Memory information on endpoint
*/
export interface Memory {
/**Available memory on endpoint */
available_memory: number;
/**Free memory on endpoint */
free_memory: number;
/**Free swap on endpoint */
free_swap: number;
/**Total memory on endpoint */
total_memory: number;
/**Total swap on endpoint */
total_swap: number;
/**Memory in use */
used_memory: number;
/**Swap in use */
used_swap: number;
}
/**
* Average CPU load. These values are always zero (0) on Windows
*/
export interface LoadPerformance {
/**Average load for one (1) min */
avg_one_min: number;
/**Average load for five (5) min */
avg_five_min: number;
/**Average load for fifteen (15) min */
avg_fifteen_min: number;
}
UserAssist
Windows UserAssist
is a Registry artifact that records applications executed
via Windows Explorer. These entries are typically ROT13 encoded (though this can
be disabled).
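ROT13 simply rotates ASCII letters by 13 positions, so encoding and decoding are the same operation. A minimal decode sketch (the function is ours, for manually spot-checking values against artemis output):

```typescript
// Decode a ROT13 encoded UserAssist value name. ROT13 rotates A-Z and a-z
// by 13 positions and leaves digits, punctuation, and path separators alone.
function rot13(input: string): string {
  return input.replace(/[a-zA-Z]/g, (c) => {
    const base = c <= "Z" ? 65 : 97; // char code of 'A' or 'a'
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base);
  });
}

// The well-known encoded "HRZR_" prefix decodes to "UEME_":
const decoded = rot13("HRZR_EHACNGU");
// decoded === "UEME_RUNPATH"
```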
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "userassist_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "userassist"
[artifacts.userassist]
# Optional
# alt_drive = 'C'
Collection Options
alt_drive
Expects a single character value. Will use an alternative drive letter when parsing UserAssist. This configuration is optional. By default artemis will use the %systemdrive% value (typically C)
Output Structure
An array of UserAssist
entries
export interface UserAssist {
/**Path of executed application */
path: string;
/**Last execution time of application in UNIXEPOCH seconds */
last_execution: number;
/**Number of times executed */
count: number;
/**Registry path to UserAssist entry */
reg_path: string;
/**ROT13 encoded path */
rot_path: string;
/**Path of executed application with folder description GUIDs resolved */
folder_path: string;
}
Users
Gets user info from the SAM Registry file
Other Parsers:
- Any tool that queries user info
References:
- N/A
TOML Collection
system = "windows"
[output]
name = "users_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "users"
[artifacts.users]
# Optional
# alt_drive = 'C'
Collection Options
alt_drive
Expects a single character value. Will use an alternative drive letter when parsing the SAM file. This configuration is optional. By default artemis will use the %systemdrive% value (typically C)
Output Structure
An array of UserInfo
entries
export interface UserInfo {
/**Last logon for account */
last_logon: number;
/**Time when password last set in UNIXEPOCH seconds */
password_last_set: number;
/**Last password failure in UNIXEPOCH seconds */
last_password_failure: number;
/**Relative ID for account. Typically last number of SID */
relative_id: number;
/**Primary group ID for account */
primary_group_id: number;
/**UAC flags associated with account */
user_account_control_flags: string[];
/**Country code for account */
country_code: number;
/**Code page for account */
code_page: number;
/**Number of password failures associated with account */
number_password_failures: number;
/**Number of logons for account */
number_logons: number;
/**Username for account */
username: string;
/**SID for account */
sid: string;
}
UsnJrnl
Windows UsnJrnl is a sparse binary file that tracks changes to files and directories. It is located in the alternate data stream (ADS) C:\$Extend\$UsnJrnl:$J. Parsing this data can sometimes show files that have been deleted. However, depending on the file activity on the system, entries in the UsnJrnl may get overwritten quickly.
Other Parsers:
References:
TOML Collection
system = "windows"
[output]
name = "usnjrnl_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "usnjrnl"
[artifacts.usnjrnl]
# Optional
# alt_drive = 'D'
Collection Options
alt_drive
Expects a single character value. Will use an alternative drive letter when parsing UsnJrnl. This configuration is optional. By default artemis will use the %systemdrive% value (typically C)
Output Structure
An array of UsnJrnl
entries
export interface UsnJrnl {
/**Entry number in the MFT */
mft_entry: number;
/**Sequence number in the MFT */
mft_sequence: number;
/**Parent entry number in the MFT */
parent_mft_entry: number;
/**Parent sequence number in the MFT */
parent_mft_sequence: number;
/**ID number in the Update Sequence Number Journal (UsnJrnl) */
update_sequence_number: number;
/**Timestamp of entry update in UNIXEPOCH seconds */
update_time: number;
/**Reason for update action */
update_reason: string;
/**Source information of the update */
update_source_flags: string;
/**Security ID associated with entry */
security_descriptor_id: number;
/**Attributes associated with entry */
file_attributes: string[];
/**Name associated with entry. Can be file or directory */
filename: string;
/**Extension if available for filename */
extension: string;
/**Full path for the UsnJrnl entry. Obtained by parsing `$MFT` and referencing the `parent_mft_entry` */
full_path: string;
}
macOS
Currently artemis has been tested on macOS Catalina and higher. Similar to the Windows version, a main focus of the artemis-core library is to make a best effort to not rely on macOS APIs. Since artemis-core is a forensic focused library, we do not want to rely on APIs from a potentially compromised system.
However, artemis-core
does use the macOS API for a handful of artifacts:
Processes
- The sysinfo crate is used to pull a process listing using macOS APIs
Systeminfo
- The sysinfo crate is also used to get system information using macOS APIs
Cron
Cron is an application that lets users create jobs on an endpoint. It is common on Unix, Linux, and macOS systems. A Cron job can be configured to execute a command at a specific time. It is a popular form of persistence on supported systems.
Other parsers:
- Any program that can read a text file
References:
TOML Collection
system = "macos" # or "linux"
[output]
name = "cron_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "cron"
Collection Options
- N/A
Output Structure
An array of Cron
entries.
export interface Cron {
/**What hour should cron job run. * means every hour */
hour: string;
/**What minute should cron job run. * means every minute */
min: string;
/**What day should cron job run. * means every day */
day: string;
/**What month should cron job run. * means every month */
month: string;
/**What weekday should cron job run. * means every day */
weekday: string;
/**Command to execute when cron job is triggered */
command: string;
}
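The schedule fields map directly onto the five columns of a classic crontab line (minute, hour, day of month, month, day of week) followed by the command. A minimal sketch of that mapping (the parser below is ours and ignores crontab extensions such as environment variables, comments, and @reboot shortcuts):

```typescript
interface CronJob {
  min: string;
  hour: string;
  day: string;
  month: string;
  weekday: string;
  command: string;
}

// Split a classic five-field crontab line into schedule fields and command.
function parseCronLine(line: string): CronJob {
  const [min, hour, day, month, weekday, ...command] = line.trim().split(/\s+/);
  return { min, hour, day, month, weekday, command: command.join(" ") };
}

// Runs /usr/bin/backup.sh at 05:00 every Monday:
const job = parseCronLine("0 5 * * 1 /usr/bin/backup.sh --full");
```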
Emond
macOS Event Monitor Daemon (Emond) is a service that allows users to register rules to perform actions when specific events are triggered, for example "system startup". Emond can be leveraged to achieve persistence on macOS. Starting with macOS Ventura (13), emond has been removed.
Other Parsers:
- None
References:
TOML Collection
system = "macos"
[output]
name = "emond_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "emond"
Collection Options
- N/A
Output Structure
An array of Emond
entries
export interface Emond {
/**Name of `Emond` rule */
name: string;
/**Is rule enabled */
enabled: boolean;
/**Event types associated with the rule */
event_types: string[];
/**Start time of the rule */
start_time: string;
/**If partial criteria match should trigger the rule */
allow_partial_criterion_match: boolean;
/**Array of command actions if rule is triggered */
command_actions: Command[];
/**Array of log actions if rule is triggered */
log_actions: Log[];
/**Array of send email actions if rule is triggered */
send_email_actions: SendEmailSms[];
/**Array of send sms actions if rule is triggered. Has same structure as send email */
send_sms_actions: SendEmailSms[];
/**Criteria for the `Emond` rule */
criterion: Record<string, unknown>[];
/**Variables associated with the criterion */
variables: Record<string, unknown>[];
/**If the emond client is enabled */
emond_clients_enabled: boolean;
}
/**
* Commands to execute if rule is triggered
*/
interface Command {
/**Command name */
command: string;
/**User associated with command */
user: string;
/**Group associated with command */
group: string;
/**Arguments associated with command */
arguments: string[];
}
/**
* Log settings if rule is triggered
*/
interface Log {
/**Log message content */
message: string;
/**Facility associated with log action */
facility: string;
/**Level of log */
log_level: string;
/**Log type */
log_type: string;
/**Parameters associated with log action */
parameters: Record<string, unknown>;
}
/**
* Email or SMS to send if rule is triggered
*/
interface SendEmailSms {
/**Content of the email/sms */
message: string;
/**Subject of the email/sms */
subject: string;
/**Path to local binary */
localization_bundle_path: string;
/**Remote URL to send the message */
relay_host: string;
/**Email associated with email/sms action */
admin_email: string;
/**Targets to receive email/sms */
recipient_addresses: string[];
}
ExecPolicy
macOS Execution Policy (ExecPolicy) tracks application execution on a system. It only tracks execution of applications that are tracked by GateKeeper.
Other Parsers:
- Any SQLite viewer
References:
TOML Collection
system = "macos"
[output]
name = "execpolicy_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "execpolicy"
Collection Options
- N/A
Output Structure
An array of ExecPolicy
entries
export interface ExecPolicy {
/**Is file signed */
is_signed: number;
/**File ID name */
file_identifier: string;
/**App bundle ID */
bundle_identifier: string;
/**Bundle version */
bundle_version: string;
/**Team ID */
team_identifier: string;
/**Signing ID */
signing_identifier: string;
/**Code Directory hash*/
cdhash: string;
/**SHA256 hash of application */
main_executable_hash: string;
/**Executable timestamp in UNIXEPOCH seconds */
executable_timestamp: number;
/**Size of file */
file_size: number;
/**Is library */
is_library: number;
/**Is file used */
is_used: number;
/**File ID associated with entry */
responsible_file_identifier: string;
/**Is valid entry */
is_valid: number;
/**Is quarantined entry */
is_quarantined: number;
/**Timestamp for executable measurements in UNIXEPOCH seconds */
executable_measurements_v2_timestamp: number;
/**Reported timestamp in UNIXEPOCH seconds */
reported_timstamp: number;
/**Primary key */
pk: number;
/**Volume UUID for entry */
volume_uuid: string;
/**Object ID for entry */
object_id: number;
/**Filesystem type */
fs_type_name: string;
/**App Bundle ID */
bundle_id: string;
/**Policy match for entry */
policy_match: number;
/**Malware result for entry */
malware_result: number;
/**Flags for entry */
flags: number;
/**Modified time in UNIXEPOCH seconds */
mod_time: number;
/**Policy scan cache timestamp in UNIXEPOCH seconds */
policy_scan_cache_timestamp: number;
/**Revocation check timestamp in UNIXEPOCH seconds */
revocation_check_time: number;
/**Scan version for entry */
scan_version: number;
/**Top policy match for entry */
top_policy_match: number;
}
Files
A regular macOS filelisting. artemis uses the walkdir crate to recursively walk the files and directories on the system. This artifact will fail on any System Integrity Protection (SIP) protected files. Since a filelisting can be extremely large, artemis will output the data after every 100k entries and then continue.
Other Parsers:
- Any tool that can recursively list files and directories
References:
- N/A
TOML Collection
system = "macos"
[output]
name = "files_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "files" # Name of artifact
[artifacts.files]
start_path = "/usr/bin" # Start of file listing
# Optional
depth = 5 # How many sub directories to descend
# Optional
metadata = true # Get executable metadata
# Optional
md5 = true # MD5 all files
# Optional
sha1 = false # SHA1 all files
# Optional
sha256 = false # SHA256 all files
# Optional
path_regex = "" # Regex for paths
# Optional
file_regex = "" # Regex for files
Collection Options
start_path
Where to start the file listing. Must exist on the endpoint. To start at root use /. This configuration is required
depth
Specify how many directories to descend from the start_path. Default is one (1). Must be a positive number. Max value is 255. This configuration is optional
metadata
Get Macho data from Macho files. This configuration is optional. Default is false
md5
Boolean value to enable MD5 hashing on all files. This configuration is optional. Default is false
sha1
Boolean value to enable SHA1 hashing on all files. This configuration is optional. Default is false
sha256
Boolean value to enable SHA256 hashing on all files. This configuration is optional. Default is false
path_regex
Only descend into paths (directories) that match the provided regex. This configuration is optional. Default is no Regex
file_regex
Only return entries that match the provided regex. This configuration is optional. Default is no Regex
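The two regex options narrow the listing in different places: one gates which directories are descended into, the other which entries are returned. The filtering can be sketched roughly as follows (the function is illustrative, not artemis code, and artemis itself uses Rust regex syntax, which differs slightly from JavaScript's):

```typescript
// Rough illustration of how path_regex / file_regex narrow a file listing:
// descend only into directories matching pathRegex, and keep only entries
// whose filename matches fileRegex. An empty pattern matches everything.
function keepEntry(
  directory: string,
  filename: string,
  pathRegex: string,
  fileRegex: string,
): boolean {
  return new RegExp(pathRegex).test(directory) && new RegExp(fileRegex).test(filename);
}

// Only plist files under LaunchDaemons directories:
const keep = keepEntry(
  "/Library/LaunchDaemons",
  "com.example.agent.plist",
  "LaunchDaemons",
  "\\.plist$",
);
```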
Output Structure
An array of MacosFileInfo
entries
export interface MacosFileInfo {
/**Full path to file or directory */
full_path: string;
/**Directory path */
directory: string;
/**Filename */
filename: string;
/**Extension of file if any */
extension: string;
/**Created timestamp in UNIXEPOCH seconds */
created: number;
/**Modified timestamp in UNIXEPOCH seconds */
modified: number;
/**Changed timestamp in UNIXEPOCH seconds */
changed: number;
/**Accessed timestamp in UNIXEPOCH seconds */
accessed: number;
/**Size of file in bytes */
size: number;
/**Inode associated with entry */
inode: number;
/**Mode of file entry */
mode: number;
/**User ID associated with file */
uid: number;
/**Group ID associated with file */
gid: number;
/**MD5 of file */
md5: string;
/**SHA1 of file */
sha1: string;
/**SHA256 of file */
sha256: string;
/**Is the entry a file */
is_file: boolean;
/**Is the entry a directory */
is_directory: boolean;
/**Is the entry a symbolic link */
is_symlink: boolean;
/**Depth of the file from the provided start point */
depth: number;
/**Macho binary metadata */
binary_info: MachoInfo[];
}
Fsevents
macOS Filesystem Events (FsEvents
) track changes to files on a macOS system
(similar to UsnJrnl
on Windows). Parsing this data can sometimes show files
that have been deleted. Resides at /System/Volumes/Data/.fseventsd/
or
/.fseventsd
on older systems. artemis
will try to parse both locations by
default.
Other Parsers:
References:
TOML Collection
system = "macos"
[output]
name = "fsevents_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "fseventsd"
Collection Options
- N/A
Output Structure
An array of Fsevents
entries
export interface Fsevents {
/**Flags associated with FsEvent record */
flags: string[];
/**Full path to file associated with FsEvent record */
path: string;
/**Node ID associated with FsEvent record */
node: number;
/**Event ID associated with FsEvent record */
event_id: number;
}
Groups
Gets group info by parsing the plist
files at
/var/db/dslocal/nodes/Default/groups
.
Other Parsers:
- Any tool that can parse a
plist
file
References:
- N/A
TOML Collection
system = "macos"
[output]
name = "groups_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "groups"
Collection Options
- N/A
Output Structure
An array of Groups
entries
export interface Groups {
/**GID for the group */
gid: string[];
/**Name of the group */
name: string[];
/**Real name associated with the group */
real_name: string[];
/**Users associated with group */
users: string[];
/**Group members in the group */
groupmembers: string[];
/**UUID associated with the group */
uuid: string[];
}
Launchd
macOS launch daemons (launchd) are the most common way to register applications for persistence on macOS. launchd can be registered for a single user or system wide. artemis will try to parse all known launchd locations by default.
/Users/%/Library/LaunchDaemons/
/Users/%/Library/LaunchAgents/
/System/Library/LaunchDaemons/
/Library/Apple/System/Library/LaunchDaemons/
/System/Library/LaunchAgents/
/Library/Apple/System/Library/LaunchAgents/
Other Parsers:
- Any tool that can parse a
plist
file
References:
- launchd
man launchd.plist
TOML Collection
system = "macos"
[output]
name = "launchd_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "launchd"
Collection Options
- N/A
Output Structure
An array of Launchd
entries
export interface Launchd {
/**JSON representation of launchd plist contents */
launchd_data: Record<string, unknown>;
/**Full path of the plist file */
plist_path: string;
}
Loginitems
macOS LoginItems
are a form of persistence on macOS systems. They are
triggered when a user logs on to the system. They are located at:
- /Users/%/Library/Application Support/com.apple.backgroundtaskmanagementagent/backgrounditems.btm (pre-Ventura)
- /var/db/com.apple.backgroundtaskmanagement/BackgroundItems-v4.btm (Ventura+)
Both are plist files, however the actual LoginItem data is in an additional binary format known as a Bookmark that needs to be parsed.
Other Parsers:
References:
TOML Collection
system = "macos"
[output]
name = "loginitems_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "loginitems"
Collection Options
- N/A
Output Structure
An array of LoginItem
entries
export interface LoginItems {
/**Path to file to run */
path: string[];
/**Path represented as Catalog Node ID */
cnid_path: number[];
/**Created timestamp of target file in UNIXEPOCH seconds */
created: number;
/**Path to the volume of target file */
volume_path: string;
/**Target file URL type */
volume_url: string;
/**Name of volume target file is on */
volume_name: string;
/**Volume UUID */
volume_uuid: string;
/**Size of target volume in bytes */
volume_size: number;
/**Created timestamp of volume in UNIXEPOCH seconds */
volume_created: number;
/**Volume Property flags */
volume_flag: number[];
/**Flag if volume is the root filesystem */
volume_root: boolean;
/**Localized name of target file */
localized_name: string;
/**Read-Write security extension of target file */
security_extension_rw: string;
/**Read-Only security extension of target file */
security_extension_ro: string;
/**File property flags */
target_flags: number[];
/**Username associated with `Bookmark` */
username: string;
/**Folder index number associated with target file */
folder_index: number;
/**UID associated with `LoginItem` */
uid: number;
/**`LoginItem` creation flags */
creation_options: number;
/**Is `LoginItem` bundled in app */
is_bundled: boolean;
/**App ID associated with `LoginItem` */
app_id: string;
/**App binary name */
app_binary: string;
/**Is target file executable */
is_executable: boolean;
/**Does target file have file reference flag */
file_ref_flag: boolean;
/**Path to `LoginItem` source */
source_path: string;
}
Macho
macOS Mach object (macho
) is the executable format for applications on macOS.
artemis
is able to parse basic metadata from macho
files.
Other Parsers:
References:
TOML Collection
There is no way to collect just macho data with artemis; instead it is an optional feature for the macOS filelisting and processes artifacts. However, it is possible to directly parse macho files using JavaScript. See the scripts chapter for examples.
Configuration Options
N/A
Output Structure
An array of macho
entries
export interface MachoInfo {
/**CPU arch */
cpu_type: string;
/**CPU model */
cpu_subtype: string;
/**File type, ex: executable, dylib, object, core, etc*/
filetype: string;
/**Segments of the macho binary */
segments: Segment64[];
/**Dynamic libraries in the macho binary */
dylib_commands: DylibCommand[];
/**Macho binary id */
id: string;
/**Macho team id */
team_id: string;
/**Parsed out macho entitlements from plist */
entitlements: Record<string, unknown>;
/**Base64 encoded embedded certs within the binary */
certs: string;
/**Minimum OS binary can run on */
minos: string;
/**SDK version macho was compiled for */
sdk: string;
}
/**
* Metadata about macho Segments
*/
export interface Segment64 {
/**Name of segment */
name: string;
/**Virtual memory address */
vmaddr: number;
/**Virtual memory size */
vmsize: number;
/**Offset in the macho binary */
file_offset: number;
/**Size of segment */
file_size: number;
/**Maximum permitted memory protection */
max_prot: number;
/**Initial memory protection */
init_prot: number;
/**Number of sections in the segment */
nsects: number;
/**Segment flags */
flags: number;
/**Array of section data */
sections: Sections[];
}
/**
* Metadata about macho Sections
*/
export interface Sections {
/**Name of section */
section_name: string;
/**Name of segment the section belongs to */
segment_name: string;
/**Virtual memory address */
addr: number;
/**Size of section */
size: number;
/**Section offset in file */
offset: number;
/**Section byte alignment */
align: number;
/**File offset to relocation entries */
relocation_offset: number;
/**Number of relocation entries */
number_relocation_entries: number;
/**Flags related to the section */
flags: number;
/**Reserved */
reserved: number;
/**Reserved */
reserved2: number;
/**Reserved */
reserved3: number;
}
/**
* Metadata about macho dylibcommand
*/
export interface DylibCommand {
/**Name of dynamic library */
name: string;
/**Timestamp when the library was built */
timestamp: number;
/**Version of dynamic library */
current_version: number;
/**Compatibility version of dynamic library */
compatibility_version: string;
}
Plist
macOS property lists (plist
) are the primary format for application
configurations. The contents of plists
can be: xml, json, or binary. XML is
most common.
TOML Collection
There is no way to collect plist data with artemis; instead it is a feature for scripting. See the scripts chapter for examples.
Configuration Options
N/A
Output Structure
A JSON representation of the plist
contents
Record<string, unknown>;
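Since the output is a plain JSON object, it is easy to review in a script. A small sketch (ours, not an artemis API) that recursively collects every value stored under a given key, such as ProgramArguments, anywhere inside a parsed plist:

```typescript
// Recursively collect every value stored under `key` anywhere inside a
// parsed plist object. Both nested objects and arrays are walked.
function findKey(data: unknown, key: string): unknown[] {
  const hits: unknown[] = [];
  if (Array.isArray(data)) {
    for (const item of data) hits.push(...findKey(item, key));
  } else if (data !== null && typeof data === "object") {
    for (const [k, v] of Object.entries(data as Record<string, unknown>)) {
      if (k === key) hits.push(v);
      hits.push(...findKey(v, key)); // keep walking for nested matches
    }
  }
  return hits;
}

// Hypothetical parsed launchd plist:
const plist = { Label: "com.example.agent", ProgramArguments: ["/bin/sh", "-c", "id"] };
const args = findKey(plist, "ProgramArguments");
```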
Processes
Gets a standard process listing using the macOS API
Other Parsers:
- Any tool that calls the macOS API
References:
- N/A
TOML Collection
system = "macos"
[output]
name = "process_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "processes" # Name of artifact
[artifacts.processes]
# Get executable metadata
metadata = true
# MD5 hash process binary
md5 = true
# SHA1 hash process binary
sha1 = false
# SHA256 hash process binary
sha256 = false
Collection Options
- metadata: Get Macho data from process binary
- md5: Boolean value to MD5 hash process binary
- sha1: Boolean value to SHA1 hash process binary
- sha256: Boolean value to SHA256 hash process binary
Output Structure
An array of MacosProcessInfo
entries
export interface MacosProcessInfo {
/**Full path to the process binary */
full_path: string;
/**Name of process */
name: string;
/**Path to process binary */
path: string;
/** Process ID */
pid: number;
/** Parent Process ID */
ppid: number;
/**Environment variables associated with process */
environment: string;
/**Status of the process */
status: string;
/**Process arguments */
arguments: string;
/**Process memory usage */
memory_usage: number;
/**Process virtual memory usage */
virtual_memory_usage: number;
/**Process start time in UNIXEPOCH seconds*/
start_time: number;
/** User ID associated with process */
uid: string;
/**Group ID associated with process */
gid: string;
/**MD5 hash of process binary */
md5: string;
/**SHA1 hash of process binary */
sha1: string;
/**SHA256 hash of process binary */
sha256: string;
/**MACHO metadata associated with process binary */
binary_info: MachoInfo[];
}
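Because each entry carries both `pid` and `ppid`, a script can reconstruct parent/child relationships from a collected listing. A minimal sketch, typing only the fields it uses (real entries are full MacosProcessInfo objects); the sample listing is illustrative.

```typescript
// Sketch: find the children of a given process from a parsed listing
// using the pid/ppid fields.
interface ProcEntry {
  pid: number;
  ppid: number;
  name: string;
}

function childrenOf(procs: ProcEntry[], ppid: number): ProcEntry[] {
  return procs.filter((p) => p.ppid === ppid);
}

// Illustrative sample data
const listing: ProcEntry[] = [
  { pid: 1, ppid: 0, name: "launchd" },
  { pid: 200, ppid: 1, name: "logind" },
  { pid: 300, ppid: 200, name: "zsh" },
];

const kids = childrenOf(listing, 1); // processes spawned by pid 1
```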
Shell History
Many Unix-like and Linux systems provide a shell interface that allows a user to execute a command or application. Many of these shells keep a record of the commands executed and, depending on the configuration, the timestamp when each command was executed. Popular shells include:
- bash
- zsh
- fish
- sh
- PowerShell
Artemis
supports parsing zsh
and bash
shell history. In addition, it
supports parsing Python
history (despite not being a shell).
Other parsers:
- Any program that reads a text file
References:
TOML Collection
system = "macos" # or "linux"
[output]
name = "shellhistory_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "shell_history"
Collection Options
- N/A
Output Structure
An array of BashHistory
for bash
data, ZshHistory
for zsh
data, and
PythonHistory
for Python
data per user.
export interface BashHistory {
/**Array of lines associated with `.bash_history` file */
history: BashData[];
/**Path to `.bash_history` file */
path: string;
/**User directory name */
user: string;
}
/**
* History data associated with `.bash_history`
*/
export interface BashData {
/**Line entry */
history: string;
/**Timestamp associated with line entry in UNIXEPOCH. Timestamps are **optional** in `.bash_history`, zero (0) is returned for no timestamp */
timestamp: number;
/**Line number */
line: number;
}
export interface ZshHistory {
/**Array of lines associated with `.zsh_history` file */
history: ZshData[];
/**Path to `.zsh_history` file */
path: string;
/**User directory name */
user: string;
}
/**
* History data associated with `.zsh_history`
*/
export interface ZshData {
/**Line entry */
history: string;
/**Timestamp associated with line entry in UNIXEPOCH. Timestamps are **optional** in `.zsh_history`, zero (0) is returned for no timestamp */
timestamp: number;
/**Line number */
line: number;
/**Duration of command */
duration: number;
}
export interface PythonHistory {
/**Array of lines associated with `.python_history` file */
history: PythonData[];
/**Path to `.python_history` file */
path: string;
/**User directory name */
user: string;
}
/**
* History data associated with `.python_history`
*/
export interface PythonData {
/**Line entry */
history: string;
/**Line number */
line: number;
}
Sudo Logs
Unix SudoLogs
are the log files associated with sudo execution. Sudo ("super
user do" or "substitute user") is used to run programs with elevated
privileges.
macOS SudoLogs
are stored in the Unified Log files.
Linux SudoLogs
are stored in the Systemd Journal files.
The log entries show evidence of commands executed with elevated privileges.
Other Parsers:
- None
References:
- N/A
TOML Collection
system = "macos" # or "linux"
[output]
name = "sudologs_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "sudologs"
Collection Options
- N/A
Output Structure
On a macOS system sudologs
return an array of UnifiedLog
entries
export interface UnifiedLog {
/**Subsystem used by the log entry */
subsystem: string;
/**Library associated with the log entry */
library: string;
/**Log entry category */
category: string;
/**Process ID associated with log entry */
pid: number;
/**Effective user ID associated with log entry */
euid: number;
/**Thread ID associated with log entry */
thread_id: number;
/**Activity ID associated with log entry */
activity_id: number;
/**UUID of library associated with the log entry */
library_uuid: string;
/**UNIXEPOCH timestamp of log entry in nanoseconds */
time: number;
/**Log entry event type */
event_type: string;
/**Log entry log type */
log_type: string;
/**Process associated with log entry */
process: string;
/**UUID of process associated with log entry */
process_uuid: string;
/**Raw string message associated with log entry*/
raw_message: string;
/**Boot UUID associated with log entry */
boot_uuid: string;
/**Timezone associated with log entry */
timezone_name: string;
/**Strings associated with the log entry */
message_entries: Record<string, string | number>;
/**
* Resolved message entry associated with the log entry.
* Merge of `raw_message` and `message_entries`
*/
message: string;
}
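The `time` field is UNIXEPOCH nanoseconds, so converting it to ISO 8601 helps when building a timeline. A minimal sketch; note that nanosecond values can exceed `Number.MAX_SAFE_INTEGER`, so BigInt is safer when exact nanosecond precision matters.

```typescript
// Sketch: convert the nanosecond `time` field of a UnifiedLog entry to an
// ISO 8601 string (millisecond precision).
function nanosToIso(nanos: number): string {
  // Divide nanoseconds down to milliseconds for the Date constructor.
  return new Date(Math.floor(nanos / 1_000_000)).toISOString();
}

const iso = nanosToIso(1_650_000_000_000_000_000); // "2022-04-15T05:20:00.000Z"
```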
SystemInfo
Gets system metadata associated with the endpoint
Other Parsers:
- Any tool that calls the macOS API or queries system information
References:
- N/A
TOML Collection
system = "macos"
[output]
name = "systeminfo_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "systeminfo"
Collection Options
- N/A
Output Structure
A SystemInfo
object structure
export interface SystemInfo {
/**Boot time for endpoint */
boot_time: number;
/**Endpoint hostname */
hostname: string;
/**Endpoint OS version */
os_version: string;
/**Uptime of endpoint */
uptime: number;
/**Endpoint kernel version */
kernel_version: string;
/**Endpoint platform */
platform: string;
/**CPU information */
cpu: Cpus[];
/**Disks information */
disks: Disks[];
/**Memory information */
memory: Memory;
/**Performance information */
performance: LoadPerformance;
}
/**
* CPU information on endpoint
*/
export interface Cpus {
/**CPU frequency */
frequency: number;
/**CPU usage on endpoint */
cpu_usage: number;
/**Name of CPU */
name: string;
/**Vendor ID for CPU */
vendor_id: string;
/**CPU brand */
brand: string;
/**Core Count */
physical_core_count: number;
}
/**
* Disk information on endpoint
*/
export interface Disks {
/**Type of disk */
disk_type: string;
/**Filesystem for disk */
file_system: string;
/**Disk mount point */
mount_point: string;
/**Disk storage */
total_space: number;
/**Storage remaining */
available_space: number;
/**If disk is removable */
removable: boolean;
}
/**
* Memory information on endpoint
*/
export interface Memory {
/**Available memory on endpoint */
available_memory: number;
/**Free memory on endpoint */
free_memory: number;
/**Free swap on endpoint */
free_swap: number;
/**Total memory on endpoint */
total_memory: number;
/**Total swap on endpoint */
total_swap: number;
/**Memory in use */
used_memory: number;
/**Swap in use */
used_swap: number;
}
/**
* Average CPU load
*/
export interface LoadPerformance {
/**Average load for one (1) min */
avg_one_min: number;
/**Average load for five (5) min */
avg_five_min: number;
/**Average load for fifteen (15) min */
avg_fifteen_min: number;
}
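The Memory structure makes it simple to derive quick health metrics from a collection. A minimal sketch, typing only the two fields it uses; the sample values are illustrative.

```typescript
// Sketch: derive a memory-usage percentage from the Memory structure.
interface MemorySnapshot {
  total_memory: number;
  used_memory: number;
}

function memoryUsedPercent(mem: MemorySnapshot): number {
  return Math.round((mem.used_memory / mem.total_memory) * 100);
}

// Illustrative sample values (bytes)
const pct = memoryUsedPercent({ total_memory: 16_000, used_memory: 4_000 }); // 25
```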
UnifiedLogs
macOS unifiedlogs
are the primary files associated with logging system
activity. They are stored in a binary format at /var/db/diagnostics/
.
Other Parsers:
- UnifiedLogReader (Only partial support)
References:
TOML Collection
system = "macos"
[output]
name = "unifiedlogs_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "unifiedlogs"
[artifacts.unifiedlogs]
sources = ["Special"]
Collection Options
- sources: List of directories that should be included when parsing the unifiedlogs. These directories are found at /var/db/diagnostics/. Only the following directories contain logs:
  - Persist
  - Special
  - Signpost
  - HighVolume

To parse all logs you would use:
sources = ["Special", "Persist", "Signpost", "HighVolume"]
Output Structure
An array of UnifiedLog
entries
export interface UnifiedLog {
/**Subsystem used by the log entry */
subsystem: string;
/**Library associated with the log entry */
library: string;
/**Log entry category */
category: string;
/**Process ID associated with log entry */
pid: number;
/**Effective user ID associated with log entry */
euid: number;
/**Thread ID associated with log entry */
thread_id: number;
/**Activity ID associated with log entry */
activity_id: number;
/**UUID of library associated with the log entry */
library_uuid: string;
/**UNIXEPOCH timestamp of log entry in nanoseconds */
time: number;
/**Log entry event type */
event_type: string;
/**Log entry log type */
log_type: string;
/**Process associated with log entry */
process: string;
/**UUID of process associated with log entry */
process_uuid: string;
/**Raw string message associated with log entry*/
raw_message: string;
/**Boot UUID associated with log entry */
boot_uuid: string;
/**Timezone associated with log entry */
timezone_name: string;
/**Strings associated with the log entry */
message_entries: Record<string, string | number>;
/**
* Resolved message entry associated with the log entry.
* Merge of `raw_message` and `message_entries`
*/
message: string;
}
Users
Gets user info by parsing the plist
files at
/var/db/dslocal/nodes/Default/users
.
Other Parsers:
- Any tool that can parse a
plist
file
References:
- N/A
TOML Collection
system = "macos"
[output]
name = "users_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "users"
Collection Options
- N/A
Output Structure
An array of Users
entries
export interface Users {
/**UID for the user */
uid: string[];
/**GID associated with the user */
gid: string[];
/**User name */
name: string[];
/**Real name associated with user */
real_name: string[];
/**Base64 encoded photo associated with user */
account_photo: string[];
/**Timestamp the user was created in UNIXEPOCH seconds */
account_created: number;
/**Password last changed for the user in UNIXEPOCH seconds */
password_last_set: number;
/**Shell associated with the user */
shell: string[];
/**Unlock associated with the user */
unlock_options: string[];
/**Home path associated with user */
home_path: string[];
/**UUID associated with user */
uuid: string[];
}
Linux
Currently artemis
has been tested on Ubuntu 18.04 and higher, Fedora, and Arch Linux. Similar to the Windows and macOS versions, a main focus point of the library artemis-core
is to make a best effort to not rely on the Linux APIs. Since artemis-core
is a forensic
focused library, we do not want to rely on APIs from a potentially compromised
system.
However, artemis-core
does use the Linux API for a handful of artifacts:
- Processes: The sysinfo crate is used to pull a process listing using Linux APIs
- Systeminfo: The sysinfo crate is also used to get system information using Linux APIs
Cron
Cron
is an application that lets users create jobs on an endpoint. It is
common on Unix, Linux, and macOS systems. A Cron
job can be configured to
execute a command on at a specific time. It is a popular form of persistence on
supported systems.
Other parsers:
- Any program that reads a text file
References:
TOML Collection
system = "linux" # or "macos"
[output]
name = "cron_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "cron"
Collection Options
- N/A
Output Structure
An array of Cron
entries.
export interface Cron {
/**What hour should cron job run. * means every hour */
hour: string;
/**What minute should cron job run. * means every minute */
min: string;
/**What day should cron job run. * means every day */
day: string;
/**What month should cron job run. * means every month */
month: string;
/**What weekday should cron job run. * means every day */
weekday: string;
/**Command to execute when cron job is triggered */
command: string;
}
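Since the schedule fields mirror the standard crontab columns, a parsed entry can be rendered back as a single crontab-style line for quick review. A minimal sketch; the sample entry is illustrative.

```typescript
// Sketch: render a parsed Cron entry's schedule fields as a crontab-style
// line (min hour day month weekday command).
interface CronEntry {
  min: string;
  hour: string;
  day: string;
  month: string;
  weekday: string;
  command: string;
}

function toCrontabLine(entry: CronEntry): string {
  return [entry.min, entry.hour, entry.day, entry.month, entry.weekday, entry.command].join(" ");
}

// Illustrative sample: run a script daily at 03:00
const line = toCrontabLine({
  min: "0", hour: "3", day: "*", month: "*", weekday: "*",
  command: "/usr/local/bin/backup.sh",
});
// "0 3 * * * /usr/local/bin/backup.sh"
```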
ELF
The Linux Executable and Linkable Format (ELF
) is the executable format for applications on Linux systems.
artemis
is able to parse basic metadata from ELF
files.
Other Parsers:
References:
TOML Collection
There is no way to collect just ELF
data with artemis
; instead it is an
optional feature for the Linux filelisting
and processes
artifacts.
However, it is possible to directly parse ELF
files by using JavaScript
.
See the scripts chapter for examples.
Configuration Options
N/A
Output Structure
An array of ElfInfo
entries
export interface ElfInfo {
/**Array of symbols in ELF binary */
symbols: string[];
/**Array of sections in ELF binary */
sections: string[];
/**Machine type information in ELF binary */
machine_type: string;
}
Files
A regular Linux filelisting. artemis
uses the
walkdir crate to recursively walk the files
and directories on the system. Since a filelisting can be extremely large, artemis
will output the data every 100k entries and then continue.
Other Parsers:
- Any tool that can recursively list files and directories
References:
- N/A
TOML Collection
system = "linux"
[output]
name = "files_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "files" # Name of artifact
[artifacts.files]
start_path = "/usr/bin" # Start of file listing
# Optional
depth = 5 # How many sub directories to descend
# Optional
metadata = true # Get executable metadata
# Optional
md5 = true # MD5 all files
# Optional
sha1 = false # SHA1 all files
# Optional
sha256 = false # SHA256 all files
# Optional
path_regex = "" # Regex for paths
# Optional
file_regex = "" # Regex for files
Collection Options
- start_path: Where to start the file listing. Must exist on the endpoint. To start at root use /. This configuration is required
- depth: Specify how many directories to descend from the start_path. Default is one (1). Must be a positive number. Max value is 255. This configuration is optional
- metadata: Get ELF data from ELF files. This configuration is optional. Default is false
- md5: Boolean value to enable MD5 hashing on all files. This configuration is optional. Default is false
- sha1: Boolean value to enable SHA1 hashing on all files. This configuration is optional. Default is false
- sha256: Boolean value to enable SHA256 hashing on all files. This configuration is optional. Default is false
- path_regex: Only descend into paths (directories) that match the provided regex. This configuration is optional. Default is no regex
- file_regex: Only return entries that match the provided regex. This configuration is optional. Default is no regex
Output Structure
An array of LinuxFileInfo
entries
export interface LinuxFileInfo {
/**Full path to file or directory */
full_path: string;
/**Directory path */
directory: string;
/**Filename */
filename: string;
/**Extension of file if any */
extension: string;
/**Created timestamp in UNIXEPOCH seconds */
created: number;
/**Modified timestamp in UNIXEPOCH seconds */
modified: number;
/**Changed timestamp in UNIXEPOCH seconds */
changed: number;
/**Accessed timestamp in UNIXEPOCH seconds */
accessed: number;
/**Size of file in bytes */
size: number;
/**Inode associated with entry */
inode: number;
/**Mode of file entry */
mode: number;
/**User ID associated with file */
uid: number;
/**Group ID associated with file */
gid: number;
/**MD5 of file */
md5: string;
/**SHA1 of file */
sha1: string;
/**SHA256 of file */
sha256: string;
/**Is the entry a file */
is_file: boolean;
/**Is the entry a directory */
is_directory: boolean;
/**Is the entry a symbolic links */
is_symlink: boolean;
/**Depth the file from provided start point */
depth: number;
/**ELF binary metadata */
binary_info: ElfInfo[];
}
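A collected file listing can be summarized directly from these fields, for example totaling bytes per directory. A minimal sketch, typing only the fields it uses (real entries are full LinuxFileInfo objects); the sample entries are illustrative.

```typescript
// Sketch: total file bytes per directory from a parsed file listing,
// skipping directories and symlink entries via the is_file flag.
interface FileEntry {
  directory: string;
  size: number;
  is_file: boolean;
}

function sizePerDirectory(entries: FileEntry[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of entries) {
    if (!e.is_file) continue;
    totals.set(e.directory, (totals.get(e.directory) ?? 0) + e.size);
  }
  return totals;
}

// Illustrative sample data
const totals = sizePerDirectory([
  { directory: "/usr/bin", size: 100, is_file: true },
  { directory: "/usr/bin", size: 50, is_file: true },
  { directory: "/usr/bin", size: 0, is_file: false },
]);
// totals.get("/usr/bin") === 150
```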
Journals
Linux Journals
are the log files associated with the systemd service. Systemd
is a popular system service that is common on most Linux distros. The logs can
contain data related to application activity, sudo commands, and much more.
Other Parsers:
- None
References:
TOML Collection
system = "linux"
[output]
name = "journals_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "journals"
Collection Options
- N/A
Output Structure
An array of Journal
entries
export interface Journal {
/**User ID associated with entry */
uid: number;
/**Group ID associated with entry */
gid: number;
/**Process ID associated with entry */
pid: number;
/**Thread ID associated with entry */
thread_id: number;
/**Command associated with entry */
comm: string;
/**Priority associated with entry */
priority: string;
/**Syslog facility associated with entry */
syslog_facility: string;
/**Executable file associated with entry */
executable: string;
/**Cmdline args associated with entry */
cmdline: string;
/**Effective capabilities of process associated with entry */
cap_effective: string;
/**Session of the process associated with entry */
audit_session: number;
/**Login UID of the process associated with entry */
audit_loginuid: number;
/**Systemd Control Group associated with entry */
systemd_cgroup: string;
/**Systemd owner UID associated with entry */
systemd_owner_uid: number;
/**Systemd unit associated with entry */
systemd_unit: string;
/**Systemd user unit associated with entry */
systemd_user_unit: string;
/**Systemd slice associated with entry */
systemd_slice: string;
/**Systemd user slice associated with entry */
systemd_user_slice: string;
/**Systemd invocation ID associated with entry */
systemd_invocation_id: string;
/**Kernel Boot ID associated with entry */
boot_id: string;
/**Machine ID of host associated with entry */
machine_id: string;
/**Hostname associated with entry */
hostname: string;
/**Runtime scope associated with entry */
runtime_scope: string;
/**Trusted Timestamp associated with entry in UNIXEPOCH microseconds */
source_realtime: number;
/**Timestamp associated with entry in UNIXEPOCH microseconds */
realtime: number;
/**How entry was received by the Journal service */
transport: string;
/**Journal message entry */
message: string;
/**Message ID associated with Journal Catalog */
message_id: string;
/**Unit result associated with entry */
unit_result: string;
/**Code line for file associated with entry */
code_line: number;
/**Code function for file associated with entry */
code_function: string;
/**Code file associated with entry */
code_file: string;
/**User invocation ID associated with entry */
user_invocation_id: string;
/**User unit associated with entry */
user_unit: string;
/**
* Custom fields associated with entry.
* Example:
* ```
* "custom": {
* "_SOURCE_MONOTONIC_TIMESTAMP": "536995",
* "_UDEV_SYSNAME": "0000:00:1c.3",
* "_KERNEL_DEVICE": "+pci:0000:00:1c.3",
* "_KERNEL_SUBSYSTEM": "pci"
* }
* ```
*/
custom: Record<string, string>;
/**Sequence Number associated with entry */
seqnum: number;
}
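The `realtime` field is UNIXEPOCH microseconds, so converting it to ISO 8601 helps when merging Journal entries into a timeline. A minimal sketch:

```typescript
// Sketch: convert the microsecond `realtime` field of a Journal entry to
// an ISO 8601 string (millisecond precision).
function microsToIso(micros: number): string {
  // Divide microseconds down to milliseconds for the Date constructor.
  return new Date(Math.floor(micros / 1_000)).toISOString();
}

const iso = microsToIso(1_650_000_000_000_000); // "2022-04-15T05:20:00.000Z"
```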
Logons
Linux stores Logon
information in several different files depending on the
distro and software installed. Typically the following files contain logon
information on Linux:
- wtmp - Historical logons
- btmp - Failed logons
- utmp - Users currently logged on
In addition, Journal files may also contain logon information
Currently artemis
supports all three (3) files above when obtaining Logon
information; it will only parse the wtmp, utmp, and btmp files.
If you want to check for logons in Journal
files, you can try to apply a
filter to the Journal
artifact.
Other Parsers:
- N/A
References:
TOML Collection
system = "linux"
[output]
name = "logon_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "logon"
Collection Options
- N/A
Output Structure
An array of Logon
entries
export interface Logon {
/**Logon type for logon entry */
logon_type: string;
/**Process ID */
pid: number;
/** Terminal info */
terminal: string;
/**Terminal ID for logon entry */
terminal_id: number;
/**Username for logon */
username: string;
/**Hostname for logon source */
hostname: string;
/**Termination status for logon entry */
termination_status: number;
/**Exit status logon entry */
exit_status: number;
/**Session for logon entry */
session: number;
/**Timestamp for logon in UNIXEPOCH seconds */
timestamp: number;
/**Source IP for logon entry */
ip: string;
/**Status of logon entry: `Success` or `Failed` */
status: string;
}
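Because each entry carries a source `ip` and a `status` of `Success` or `Failed`, failed logons can be tallied per source to spot brute-force attempts. A minimal sketch, typing only the fields it uses; the sample entries are illustrative.

```typescript
// Sketch: count failed logons per source IP from parsed Logon entries.
interface LogonEntry {
  ip: string;
  status: string; // "Success" or "Failed"
}

function failedBySource(entries: LogonEntry[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of entries) {
    if (e.status !== "Failed") continue;
    counts.set(e.ip, (counts.get(e.ip) ?? 0) + 1);
  }
  return counts;
}

// Illustrative sample data
const counts = failedBySource([
  { ip: "10.0.0.5", status: "Failed" },
  { ip: "10.0.0.5", status: "Failed" },
  { ip: "10.0.0.9", status: "Success" },
]);
// counts.get("10.0.0.5") === 2
```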
Processes
Gets a standard process listing using the Linux API
Other Parsers:
- Any tool that calls the Linux API
References:
- N/A
TOML Collection
system = "linux"
[output]
name = "process_collection"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "processes" # Name of artifact
[artifacts.processes]
# Get executable metadata
metadata = true
# MD5 hash process binary
md5 = true
# SHA1 hash process binary
sha1 = false
# SHA256 hash process binary
sha256 = false
Collection Options
- metadata: Get ELF data from process binary
- md5: Boolean value to MD5 hash process binary
- sha1: Boolean value to SHA1 hash process binary
- sha256: Boolean value to SHA256 hash process binary
Output Structure
An array of LinuxProcessInfo
entries
export interface LinuxProcessInfo {
/**Full path to the process binary */
full_path: string;
/**Name of process */
name: string;
/**Path to process binary */
path: string;
/** Process ID */
pid: number;
/** Parent Process ID */
ppid: number;
/**Environment variables associated with process */
environment: string;
/**Status of the process */
status: string;
/**Process arguments */
arguments: string;
/**Process memory usage */
memory_usage: number;
/**Process virtual memory usage */
virtual_memory_usage: number;
/**Process start time in UNIXEPOCH seconds*/
start_time: number;
/** User ID associated with process */
uid: string;
/**Group ID associated with process */
gid: string;
/**MD5 hash of process binary */
md5: string;
/**SHA1 hash of process binary */
sha1: string;
/**SHA256 hash of process binary */
sha256: string;
/**ELF metadata associated with process binary */
binary_info: ElfInfo[];
}
Shell History
Many Unix-like and Linux systems provide a shell interface that allows a user to execute a command or application. Many of these shells keep a record of the commands executed and, depending on the configuration, the timestamp when each command was executed. Popular shells include:
- bash
- zsh
- fish
- sh
- PowerShell
Artemis
supports parsing zsh
and bash
shell history. In addition, it
supports parsing Python
history (despite not being a shell).
Other parsers:
- Any program that reads a text file
References:
TOML Collection
system = "macos" # or "linux"
[output]
name = "shellhistory_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "shell_history"
Collection Options
- N/A
Output Structure
An array of BashHistory
for bash
data, ZshHistory
for zsh
data, and
PythonHistory
for Python
data per user.
export interface BashHistory {
/**Array of lines associated with `.bash_history` file */
history: BashData[];
/**Path to `.bash_history` file */
path: string;
/**User directory name */
user: string;
}
/**
* History data associated with `.bash_history`
*/
export interface BashData {
/**Line entry */
history: string;
/**Timestamp associated with line entry in UNIXEPOCH. Timestamps are **optional** in `.bash_history`, zero (0) is returned for no timestamp */
timestamp: number;
/**Line number */
line: number;
}
export interface ZshHistory {
/**Array of lines associated with `.zsh_history` file */
history: ZshData[];
/**Path to `.zsh_history` file */
path: string;
/**User directory name */
user: string;
}
/**
* History data associated with `.zsh_history`
*/
export interface ZshData {
/**Line entry */
history: string;
/**Timestamp associated with line entry in UNIXEPOCH. Timestamps are **optional** in `.zsh_history`, zero (0) is returned for no timestamp */
timestamp: number;
/**Line number */
line: number;
/**Duration of command */
duration: number;
}
export interface PythonHistory {
/**Array of lines associated with `.python_history` file */
history: PythonData[];
/**Path to `.python_history` file */
path: string;
/**User directory name */
user: string;
}
/**
* History data associated with `.python_history`
*/
export interface PythonData {
/**Line entry */
history: string;
/**Line number */
line: number;
}
Sudo Logs
Unix SudoLogs
are the log files associated with sudo execution. Sudo ("super
user do" or "substitute user") is used to run programs with elevated
privileges.
macOS SudoLogs
are stored in the Unified Log files.
Linux SudoLogs
are stored in the Systemd Journal files.
The log entries show evidence of commands executed with elevated privileges.
Other Parsers:
- None
References:
- N/A
TOML Collection
system = "linux" # or "macos"
[output]
name = "sudologs_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "sudologs"
Collection Options
- N/A
Output Structure
On a Linux system SudoLogs
return an array of Journal
entries
export interface Journal {
/**User ID associated with entry */
uid: number;
/**Group ID associated with entry */
gid: number;
/**Process ID associated with entry */
pid: number;
/**Thread ID associated with entry */
thread_id: number;
/**Command associated with entry */
comm: string;
/**Priority associated with entry */
priority: string;
/**Syslog facility associated with entry */
syslog_facility: string;
/**Executable file associated with entry */
executable: string;
/**Cmdline args associated with entry */
cmdline: string;
/**Effective capabilities of process associated with entry */
cap_effective: string;
/**Session of the process associated with entry */
audit_session: number;
/**Login UID of the process associated with entry */
audit_loginuid: number;
/**Systemd Control Group associated with entry */
systemd_cgroup: string;
/**Systemd owner UID associated with entry */
systemd_owner_uid: number;
/**Systemd unit associated with entry */
systemd_unit: string;
/**Systemd user unit associated with entry */
systemd_user_unit: string;
/**Systemd slice associated with entry */
systemd_slice: string;
/**Systemd user slice associated with entry */
systemd_user_slice: string;
/**Systemd invocation ID associated with entry */
systemd_invocation_id: string;
/**Kernel Boot ID associated with entry */
boot_id: string;
/**Machine ID of host associated with entry */
machine_id: string;
/**Hostname associated with entry */
hostname: string;
/**Runtime scope associated with entry */
runtime_scope: string;
/**Trusted Timestamp associated with entry in UNIXEPOCH microseconds */
source_realtime: number;
/**Timestamp associated with entry in UNIXEPOCH microseconds */
realtime: number;
/**How entry was received by the Journal service */
transport: string;
/**Journal message entry */
message: string;
/**Message ID associated with Journal Catalog */
message_id: string;
/**Unit result associated with entry */
unit_result: string;
/**Code line for file associated with entry */
code_line: number;
/**Code function for file associated with entry */
code_function: string;
/**Code file associated with entry */
code_file: string;
/**User invocation ID associated with entry */
user_invocation_id: string;
/**User unit associated with entry */
user_unit: string;
/**
* Custom fields associated with entry.
* Example:
* ```
* "custom": {
* "_SOURCE_MONOTONIC_TIMESTAMP": "536995",
* "_UDEV_SYSNAME": "0000:00:1c.3",
* "_KERNEL_DEVICE": "+pci:0000:00:1c.3",
* "_KERNEL_SUBSYSTEM": "pci"
* }
* ```
*/
custom: Record<string, string>;
/**Sequence Number associated with entry */
seqnum: number;
}
SystemInfo
Gets system metadata associated with the endpoint
Other Parsers:
- Any tool that calls the Linux API or queries system information
References:
- N/A
TOML Collection
system = "linux"
[output]
name = "systeminfo_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "systeminfo"
Collection Options
- N/A
Output Structure
A SystemInfo
object structure
export interface SystemInfo {
/**Boot time for endpoint */
boot_time: number;
/**Endpoint hostname */
hostname: string;
/**Endpoint OS version */
os_version: string;
/**Uptime of endpoint */
uptime: number;
/**Endpoint kernel version */
kernel_version: string;
/**Endpoint platform */
platform: string;
/**CPU information */
cpu: Cpus[];
/**Disks information */
disks: Disks[];
/**Memory information */
memory: Memory;
/**Performance information */
performance: LoadPerformance;
}
/**
* CPU information on endpoint
*/
export interface Cpus {
/**CPU frequency */
frequency: number;
/**CPU usage on endpoint */
cpu_usage: number;
/**Name of CPU */
name: string;
/**Vendor ID for CPU */
vendor_id: string;
/**CPU brand */
brand: string;
/**Core Count */
physical_core_count: number;
}
/**
* Disk information on endpoint
*/
export interface Disks {
/**Type of disk */
disk_type: string;
/**Filesystem for disk */
file_system: string;
/**Disk mount point */
mount_point: string;
/**Disk storage */
total_space: number;
/**Storage remaining */
available_space: number;
/**If disk is removable */
removable: boolean;
}
/**
* Memory information on endpoint
*/
export interface Memory {
/**Available memory on endpoint */
available_memory: number;
/**Free memory on endpoint */
free_memory: number;
/**Free swap on endpoint */
free_swap: number;
/**Total memory on endpoint */
total_memory: number;
/**Total swap on endpoint */
total_swap: number;
/**Memory in use */
used_memory: number;
/**Swap in use */
used_swap: number;
}
/**
* Average CPU load
*/
export interface LoadPerformance {
/**Average load for one (1) min */
avg_one_min: number;
/**Average load for five (5) min */
avg_five_min: number;
/**Average load for fifteen (15) min */
avg_fifteen_min: number;
}
Applications
In addition to supporting OS specific artifacts, artemis
can parse data
associated with several applications.
Chromium
Chromium
is a popular open source web browser created and maintained by
Google. The Chromium
codebase is also used by multiple other browsers such as:
- Chrome
- Microsoft Edge
- Opera
- Brave
Artemis
supports parsing browsing history and downloads from Chromium
.
History and downloads data are stored in a SQLite file.
Other parsers:
- Any program that reads a SQLite database
References:
TOML Collection
system = "macos"
[output]
name = "chromium_macos"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "chromium-history"
[[artifacts]]
artifact_name = "chromium-downloads"
Collection Options
- N/A
Output Structure
An array of ChromiumHistory
for history data and ChromiumDownloads
for
downloads data per user.
export interface ChromiumHistory {
/**Array of history entries */
history: RawChromiumHistory[];
/**Path associated with the history file */
path: string;
/**User associated with the history file */
user: string;
}
/**
* An interface representing the Chromium SQLITE tables: `urls` and `visits`
*/
export interface RawChromiumHistory {
/**Row ID value */
id: number;
/**Page URL */
url: string;
/**Page title */
title: string;
/**Page visit count */
visit_count: number;
/**Typed count value */
typed_count: number;
/**Last visit time in UNIXEPOCH seconds */
last_visit_time: number;
/**Hidden value */
hidden: number;
/**Visits ID value */
visits_id: number;
/**From visit value */
from_visit: number;
/**Transition value */
transition: number;
/**Segment ID value */
segment_id: number;
/**Visit duration value */
visit_duration: number;
/**Opener visit value */
opener_visit: number;
}
export interface ChromiumDownloads {
/**Array of downloads entries */
downloads: RawChromiumDownloads[];
/**Path associated with the downloads file */
path: string;
/**User associated with the downloads file */
user: string;
}
/**
* An interface representing the Chromium SQLITE tables: `downloads` and `downloads_url_chains`
*/
export interface RawChromiumDownloads {
/**Row ID */
id: number;
/**GUID for download */
guid: string;
/**Path to download */
current_path: string;
/**Target path to download */
target_path: string;
/**Download start time in UNIXEPOCH seconds */
start_time: number;
/**Bytes downloaded */
received_bytes: number;
/**Total bytes downloaded */
total_bytes: number;
/**State value */
state: number;
/**Danger type value */
danger_type: number;
/**Interrupt reason value */
interrupt_reason: number;
/**Raw byte hash value */
hash: number[];
/**Download end time in UNIXEPOCH seconds */
end_time: number;
/**Opened value */
opened: number;
/**Last access time in UNIXEPOCH seconds */
last_access_time: number;
/**Transient value */
transient: number;
/**Referrer URL */
referrer: string;
/**Download source URL */
site_url: string;
/**Tab URL */
tab_url: string;
/**Tab referrer URL */
tab_referrer_url: string;
/**HTTP method used */
http_method: string;
/**By ext ID value */
by_ext_id: string;
/**By ext name value */
by_ext_name: string;
/**Etag value */
etag: string;
/**Last modified time as STRING */
last_modified: string;
/**MIME type value */
mime_type: string;
/**Original mime type value */
original_mime_type: string;
/**Downloads URL chain ID value */
downloads_url_chain_id: number;
/**Chain index value */
chain_index: number;
/**URL for download */
url: string;
}
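Since the timestamp fields above (such as last_visit_time, start_time, and end_time) are UNIXEPOCH seconds, converting them for human review is a one-liner. A minimal sketch (`unixSecondsToIso` is a hypothetical helper, not part of artemis):

```typescript
// Convert the UNIXEPOCH-second timestamps emitted by artemis into
// ISO 8601 strings for human review.
function unixSecondsToIso(seconds: number): string {
  // JavaScript Dates are millisecond based, so scale up by 1000.
  return new Date(seconds * 1000).toISOString();
}

console.log(unixSecondsToIso(0)); // 1970-01-01T00:00:00.000Z
```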
Firefox
Firefox is a popular open source web browser created and maintained by
Mozilla. artemis supports parsing browsing history and downloads from
Firefox. History and downloads data are stored in a SQLITE file.
Other parsers:
- Any program that reads a SQLITE database
References:
TOML Collection
system = "macos"
[output]
name = "firefox_tester"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "firefox-history"
[[artifacts]]
artifact_name = "firefox-downloads"
Collection Options
- N/A
Output Structure
An array of FirefoxHistory for history data and FirefoxDownloads for
downloads data per user.
export interface FirefoxHistory {
/**Array of history entries */
history: RawFirefoxHistory[];
/**Path associated with the history file */
path: string;
/**User associated with the history file */
user: string;
}
/**
* An interface representing the Firefox SQLITE tables: `moz_places` and `moz_origins`
*/
export interface RawFirefoxHistory {
/**SQLITE row id */
moz_places_id: number;
/**Page URL */
url: string;
/**Page title */
title: string;
/**URL in reverse */
rev_host: string;
/**Page visit count */
visit_count: number;
/**Hidden value */
hidden: number;
/**Typed value */
typed: number;
/**Frequency value */
frequency: number;
/**Last visit time in UNIXEPOCH seconds */
last_visit_date: number;
/**GUID for entry */
guid: string;
/**Foreign count value */
foreign_count: number;
/**Hash of URL */
url_hash: number;
/**Page description */
description: string;
/**Preview image URL value */
preview_image_url: string;
/**Prefix value (ex: https://) */
prefix: string;
/** Host value */
host: string;
}
export interface FirefoxDownloads {
/**Array of downloads entries */
downloads: RawFirefoxDownloads[];
/**Path associated with the downloads file */
path: string;
/**User associated with the downloads file */
user: string;
}
/**
* An interface representing the Firefox SQLITE tables: `moz_places`, `moz_origins`, `moz_annos`, `moz_anno_attributes`
*/
export interface RawFirefoxDownloads {
/**ID for SQLITE row */
id: number;
/**ID to history entry */
place_id: number;
/**ID to anno_attribute entry */
anno_attribute_id: number;
/**Content value */
content: string;
/**Flags value */
flags: number;
/**Expiration value */
expiration: number;
/**Download type value */
download_type: number;
/**Date added in UNIXEPOCH seconds */
date_added: number;
/**Last modified in UNIXEPOCH seconds */
last_modified: number;
/**Downloaded file name */
name: string;
/**History data associated with downloaded file */
history: RawFirefoxHistory;
}
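As a usage sketch, downloads records like the ones above can be triaged by file extension during review. The interface is trimmed to the fields used, and the extension list and helper name are just examples, not artemis API:

```typescript
// Trimmed-down shape of RawFirefoxDownloads above.
interface DownloadEntry {
  name: string;
  date_added: number;
}

// Keep only downloads whose file name ends with one of the given extensions.
function filterByExtension(entries: DownloadEntry[], exts: string[]): DownloadEntry[] {
  return entries.filter((entry) =>
    exts.some((ext) => entry.name.toLowerCase().endsWith(ext))
  );
}

const sample: DownloadEntry[] = [
  { name: "report.pdf", date_added: 1700000000 },
  { name: "payload.exe", date_added: 1700000100 },
];
console.log(filterByExtension(sample, [".exe", ".dll"])); // only payload.exe remains
```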
Safari
Safari is the builtin web browser on Apple devices. artemis supports parsing
browsing history and downloads from Safari. History data is stored in a SQLITE
file while downloads data is stored in a PLIST file in Apple's Bookmark format.
Other Parsers:
- Any program that reads a SQLITE database for History data
References:
TOML Collection
system = "macos"
[output]
name = "safari_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "safari-history"
[[artifacts]]
artifact_name = "safari-downloads"
Collection Options
- N/A
Output Structure
An array of SafariHistory for history data and SafariDownloads for downloads
data per user.
export interface SafariHistory {
/**Array of history entries */
history: RawSafariHistory[];
/**Path associated with the history file */
path: string;
/**User associated with the history file */
user: string;
}
/**
* An interface representing the Safari SQLITE tables: `history_items` and `history_visits`
*/
export interface RawSafariHistory {
/**Row ID value */
id: number;
/**Page URL */
url: string;
/**Expansion for domain */
domain_expansion: string;
/**Page visit count */
visit_count: number;
/**Daily visits in raw bytes */
daily_visit_counts: number[];
/**Weekly visits in raw bytes */
weekly_visit_counts: number[];
/**Autocomplete triggers for page */
autocomplete_triggers: number[];
/**Recompute visits count */
should_recompute_derived_visit_counts: number;
/**Visit score value */
visit_count_score: number;
/**Status code value */
status_code: number;
/**Visit time in UNIXEPOCH seconds */
visit_time: number;
/**Load successful value */
load_successful: boolean;
/**Page title */
title: string;
/**Attributes value */
attributes: number;
/**Score value */
score: number;
}
export interface SafariDownloads {
/**Array of downloads entries */
downloads: RawSafariDownloads[];
/**Path associated with the downloads file */
path: string;
/**User associated with the downloads file */
user: string;
}
/**
* An interface representing Safari downloads data
*/
export interface RawSafariDownloads {
/**Source URL for download */
source_url: string;
/**File download path */
download_path: string;
/**Sandbox ID value */
sandbox_id: string;
/**Downloaded bytes */
download_bytes: number;
/**Download ID value */
download_id: string;
/**Download start date in UNIXEPOCH seconds */
download_entry_date: number;
/**Download finish date in UNIXEPOCH seconds */
download_entry_finish: number;
/**Path to file to run */
path: string[];
/**Path represented as Catalog Node ID */
cnid_path: number[];
/**Created timestamp of target file in UNIXEPOCH seconds */
created: number;
/**Path to the volume of target file */
volume_path: string;
/**Target file URL type */
volume_url: string;
/**Name of volume target file is on */
volume_name: string;
/**Volume UUID */
volume_uuid: string;
/**Size of target volume in bytes */
volume_size: number;
/**Created timestamp of volume in UNIXEPOCH seconds */
volume_created: number;
/**Volume Property flags */
volume_flag: number[];
/**Flag if volume is the root filesystem */
volume_root: boolean;
/**Localized name of target file */
localized_name: string;
/**Read-Write security extension of target file */
security_extension_rw: string;
/**Read-Only security extension of target file */
security_extension_ro: string;
/**File property flags */
target_flags: number[];
/**Username associated with `Bookmark` */
username: string;
/**Folder index number associated with target file */
folder_index: number;
/**UID associated with `LoginItem` */
uid: number;
/**`LoginItem` creation flags */
creation_options: number;
/**Is target file executable */
is_executable: boolean;
/**Does target file have file reference flag */
file_ref_flag: boolean;
}
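Safari internally records its visit times as Core Data timestamps (seconds since 2001-01-01); the fields above are already normalized to UNIXEPOCH seconds. For reference, the conversion between the two epochs is a fixed offset. A sketch (not artemis code):

```typescript
// Seconds between the UNIX epoch (1970-01-01) and Apple's Core Data
// reference date (2001-01-01): 31 years including 8 leap days.
const COCOA_EPOCH_OFFSET = 978307200;

// Convert a Core Data (Cocoa) timestamp to UNIXEPOCH seconds.
function cocoaToUnixSeconds(cocoaSeconds: number): number {
  return cocoaSeconds + COCOA_EPOCH_OFFSET;
}

// Core Data time 0 is 2001-01-01T00:00:00Z.
console.log(cocoaToUnixSeconds(0)); // 978307200
```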
Scripting with Deno
A really cool capability of artemis is that it contains an embedded JavaScript
runtime via Deno. Deno is a V8-based JavaScript runtime written in Rust. By
importing the deno_core crate we can create our own JavaScript runtime geared
specifically for forensics and IR!
For example, the Rust artemis function get_registry(registry_file_path) can
be used to parse a provided Registry file on disk. By registering this
function with the Deno runtime we can call it directly from JavaScript! In
addition to JavaScript, TypeScript is also supported by Deno!
To summarize:
- We can create a script using TypeScript and Deno
- Compile TypeScript to JavaScript
- Execute JavaScript using artemis
Deno image from https://deno.land/artwork. The image is MIT licensed. Check out other artwork by the author at https://hashrock.studio.site/
Prerequisites for Scripting
- Deno
- A text editor or IDE that supports Deno. VSCodium and VSCode have been tested
- Deno language server extension. The extension in the VSCodium and VSCode marketplaces has been tested
- A TypeScript to JavaScript bundler. There are multiple options:
  - Deno includes a builtin bundler, however it is scheduled for deprecation (but it still works)
  - The esbuild Deno loader, which requires a simple build script in order to bundle our artemis script
Why TypeScript?
A TypeScript library is provided instead of JavaScript due to the enhanced
features and ease of use TypeScript provides over plain JavaScript.
Continuing from the get_registry(registry_file_path) example:
export interface Registry {
/**
* Full path to `Registry` key and name.
* Ex: ` ROOT\...\CurrentVersion\Run`
*/
path: string;
/**
* Path to Key
* Ex: ` ROOT\...\CurrentVersion`
*/
key: string;
/**
* Key name
* Ex: `Run`
*/
name: string;
/**
* Values associated with key name
* Ex: `Run => Vmware`. Where Run is the `key` name and `Vmware` is the value name
*/
values: Value[];
/**Timestamp of when the path was last modified */
last_modified: number;
/**Depth of key name */
depth: number;
}
/**
* The value data associated with Registry key
* References:
* https://github.com/libyal/libregf
* https://github.com/msuhanov/regf/blob/master/Windows%20registry%20file%20format%20specification.md
*/
export interface Value {
/**Name of Value */
value: string;
/**
 * Data associated with value. Type can be determined by `data_type`.
* `REG_BINARY` is base64 encoded string
*/
data: string;
/**Value type */
data_type: string;
}
/**
* Function to parse a `Registry` file
* @param path Full path to a `Registry` file
* @returns Array of `Registry` entries
*/
export function get_registry(path: string): Registry[] {
// Array of JSON objects
const data = Deno.core.ops.get_registry(path);
const reg_array: Registry[] = JSON.parse(data);
return reg_array;
}
The above TypeScript code shows that we can access our registered
get_registry function by calling it via Deno.core.ops.get_registry(path);
To make scripting even easier, a simple artemis-api library is available to
import into Deno scripts. This allows users to create scripts without needing to
know what functions are registered.
The example script below shows TypeScript code that imports the artemis-api
library to parse the SOFTWARE Registry file to get a list of installed
programs:
import { getRegistry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/mod.ts";
import { Registry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/src/windows/registry.ts";
interface InstalledPrograms {
name: string;
version: string;
install_location: string;
install_source: string;
language: string;
publisher: string;
install_string: string;
install_date: string;
uninstall_string: string;
url_info: string;
reg_path: string;
}
function grab_info(reg: Registry[]): InstalledPrograms[] {
const programs: InstalledPrograms[] = [];
const min_size = 3;
for (const entries of reg) {
if (entries.values.length < min_size) {
continue;
}
const program: InstalledPrograms = {
name: "",
version: "",
install_location: "",
install_source: "",
language: "",
publisher: "",
install_string: "",
install_date: "",
uninstall_string: "",
url_info: "",
reg_path: entries.path,
};
for (const value of entries.values) {
switch (value.value) {
case "DisplayName":
program.name = value.data;
break;
case "DisplayVersion":
program.version = value.data;
break;
case "InstallDate":
program.install_date = value.data;
break;
case "InstallLocation":
program.install_location = value.data;
break;
case "InstallSource":
program.install_source = value.data;
break;
case "Language":
program.language = value.data;
break;
case "Publisher":
program.publisher = value.data;
break;
case "UninstallString":
program.uninstall_string = value.data;
break;
case "URLInfoAbout":
program.url_info = value.data;
break;
default:
continue;
}
}
programs.push(program);
}
return programs;
}
function main() {
const path = "C:\\Windows\\System32\\config\\SOFTWARE";
const reg = getRegistry(path);
const programs: Registry[] = [];
for (const entries of reg) {
if (
!entries.path.includes(
"Microsoft\\Windows\\CurrentVersion\\Uninstall",
)
) {
continue;
}
programs.push(entries);
}
return grab_info(programs);
}
main();
We can then compile and bundle this TypeScript code to JavaScript and
execute it using artemis!
Bundling
Currently artemis requires that we have all of our JavaScript code in one
(1) .js file. However, while very simple scripts may only be one (1) file, if
we decide to import an artemis function, or split our code into multiple files,
we now have multiple files that need to be combined into one (1) .js file.
A Bundler can help us perform this task.
The TypeScript code below imports a function and the Registry interface from
artemis.
import { getRegistry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/mod.ts";
import { Registry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/src/windows/registry.ts";
interface InstalledPrograms {
name: string;
version: string;
install_location: string;
install_source: string;
language: string;
publisher: string;
install_string: string;
install_date: string;
uninstall_string: string;
url_info: string;
reg_path: string;
}
function grab_info(reg: Registry[]): InstalledPrograms[] {
const programs: InstalledPrograms[] = [];
const min_size = 3;
for (const entries of reg) {
if (entries.values.length < min_size) {
continue;
}
const program: InstalledPrograms = {
name: "",
version: "",
install_location: "",
install_source: "",
language: "",
publisher: "",
install_string: "",
install_date: "",
uninstall_string: "",
url_info: "",
reg_path: entries.path,
};
for (const value of entries.values) {
switch (value.value) {
case "DisplayName":
program.name = value.data;
break;
case "DisplayVersion":
program.version = value.data;
break;
case "InstallDate":
program.install_date = value.data;
break;
case "InstallLocation":
program.install_location = value.data;
break;
case "InstallSource":
program.install_source = value.data;
break;
case "Language":
program.language = value.data;
break;
case "Publisher":
program.publisher = value.data;
break;
case "UninstallString":
program.uninstall_string = value.data;
break;
case "URLInfoAbout":
program.url_info = value.data;
break;
default:
continue;
}
}
programs.push(program);
}
return programs;
}
function main() {
const path = "C:\\Windows\\System32\\config\\SOFTWARE";
const reg = getRegistry(path);
const programs: Registry[] = [];
for (const entries of reg) {
if (
!entries.path.includes(
"Microsoft\\Windows\\CurrentVersion\\Uninstall",
)
) {
continue;
}
programs.push(entries);
}
return grab_info(programs);
}
main();
Let's save this code to the file main.ts. Before we can compile the code to
JavaScript we have to include (bundle) mod.ts and registry.ts.
There are multiple types of bundler applications that can help us with this
task. The two (2) this book will focus on are:
- The builtin bundler in Deno
- The esbuild Deno loader
Deno Builtin Bundler
To bundle our main.ts and compile it to a .js file, we just need to run:
deno bundle --no-check main.ts > main.js. By default Deno will output to the
console when bundling.
--no-check Flag
This flag tells Deno not to type check values. This flag is required due to:
Deno.core.ops.get_registry(path)
The Deno binary is designed to support code written for the Deno platform.
However, we are using a custom Deno runtime.
The Deno binary has no idea what get_registry is because it is a custom
function we have registered in our own runtime.
Output
// deno-fmt-ignore-file
// deno-lint-ignore-file
// This code was bundled using `deno bundle` and it's not recommended to edit it manually
function get_registry(path) {
const data = Deno.core.ops.get_registry(path);
const reg_array = JSON.parse(data);
return reg_array;
}
function getRegistry(path) {
return get_registry(path);
}
function grab_info(reg) {
const programs = [];
for (const entries of reg){
if (entries.values.length < 3) {
continue;
}
const program = {
name: "",
version: "",
install_location: "",
install_source: "",
language: "",
publisher: "",
install_string: "",
install_date: "",
uninstall_string: "",
url_info: "",
reg_path: entries.path
};
for (const value of entries.values){
switch(value.value){
case "DisplayName":
program.name = value.data;
break;
case "DisplayVersion":
program.version = value.data;
break;
case "InstallDate":
program.install_date = value.data;
break;
case "InstallLocation":
program.install_location = value.data;
break;
case "InstallSource":
program.install_source = value.data;
break;
case "Language":
program.language = value.data;
break;
case "Publisher":
program.publisher = value.data;
break;
case "UninstallString":
program.uninstall_string = value.data;
break;
case "URLInfoAbout":
program.url_info = value.data;
break;
default:
continue;
}
}
programs.push(program);
}
return programs;
}
function main() {
const path = "C:\\Windows\\System32\\config\\SOFTWARE";
const reg = getRegistry(path);
const programs = [];
for (const entries of reg){
if (!entries.path.includes("Microsoft\\Windows\\CurrentVersion\\Uninstall")) {
continue;
}
programs.push(entries);
}
return grab_info(programs);
}
main();
The JavaScript code above was generated with the deno bundle command and is
now ready to be executed by artemis!
Esbuild
esbuild is a popular bundler for JavaScript. It
is normally run as a standalone binary, however we can import a module that lets
us dynamically execute esbuild using Deno. In order to do this we need a build
script. Using the same main.ts file above, create a build.ts file in the
same directory. Add the following code to build.ts:
import * as esbuild from "https://deno.land/x/esbuild@v0.15.10/mod.js";
import { denoPlugin } from "https://deno.land/x/esbuild_deno_loader@0.6.0/mod.ts";
async function main() {
const _result = await esbuild.build({
plugins: [denoPlugin()],
entryPoints: ["./main.ts"],
outfile: "main.js",
bundle: true,
format: "cjs",
});
esbuild.stop();
}
main();
The above script will use the main.ts file and bundle all of its prerequisite
files into one .js file using esbuild. We then execute this code using
deno run build.ts
Output
// https://raw.githubusercontent.com/puffycid/artemis-api/master/src/windows/registry.ts
function get_registry(path) {
const data = Deno.core.ops.get_registry(path);
const reg_array = JSON.parse(data);
return reg_array;
}
// https://raw.githubusercontent.com/puffycid/artemis-api/master/mod.ts
function getRegistry(path) {
return get_registry(path);
}
// main.ts
function grab_info(reg) {
const programs = [];
const min_size = 3;
for (const entries of reg) {
if (entries.values.length < min_size) {
continue;
}
const program = {
name: "",
version: "",
install_location: "",
install_source: "",
language: "",
publisher: "",
install_string: "",
install_date: "",
uninstall_string: "",
url_info: "",
reg_path: entries.path,
};
for (const value of entries.values) {
switch (value.value) {
case "DisplayName":
program.name = value.data;
break;
case "DisplayVersion":
program.version = value.data;
break;
case "InstallDate":
program.install_date = value.data;
break;
case "InstallLocation":
program.install_location = value.data;
break;
case "InstallSource":
program.install_source = value.data;
break;
case "Language":
program.language = value.data;
break;
case "Publisher":
program.publisher = value.data;
break;
case "UninstallString":
program.uninstall_string = value.data;
break;
case "URLInfoAbout":
program.url_info = value.data;
break;
default:
continue;
}
}
programs.push(program);
}
return programs;
}
function main() {
const path = "C:\\Windows\\System32\\config\\SOFTWARE";
const reg = getRegistry(path);
const programs = [];
for (const entries of reg) {
if (
!entries.path.includes(
"Microsoft\\Windows\\CurrentVersion\\Uninstall",
)
) {
continue;
}
programs.push(entries);
}
return grab_info(programs);
}
main();
The JavaScript code above was generated by esbuild via Deno and is now ready
to be executed by artemis!
Scripts
The easiest way to start scripting is to create a Deno project.
deno init <project name> will create a project in the current directory.
The Deno website contains the full documentation on a Deno project layout. By default the following files are created for a new project:
- deno.jsonc
- main.ts
- main_bench.ts
- main_test.ts
Since we are using a runtime built specifically for forensics and IR, none of the
builtin Deno functions are available. All scripts must import the
artemis-api modules in order to effectively create scripts. In addition to
artemis-api, only the vanilla JavaScript API is available for scripting.
To import artemis functions into your script, open main.ts and import the
function associated with the artifact you want to parse. For example, to parse
the Windows Registry you would import:
import { getRegistry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/mod.ts";
If you wanted to parse the Windows Registry and manipulate the parsed data you
would import:
import { getRegistry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/mod.ts";
import { Registry } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/src/windows/registry.ts";
A list of all exported artemis functions can be found at
https://github.com/puffyCid/artemis-api. All artifacts supported by artemis
are callable from TypeScript. The structured output produced by each
artifact is listed in the respective artifact chapter. For example, the
structured Registry data format returned by getRegistry is found in the
Registry chapter.
Once we have created and bundled our script, we just need to base64 encode it
before providing it to artemis.
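The encoding step can be done with any base64 tool. A sketch in TypeScript using Node's Buffer (with Deno, btoa would also work for ASCII-only scripts; `encodeScript` is a hypothetical helper):

```typescript
import { Buffer } from "node:buffer";

// Base64 encode bundled JavaScript so it can be pasted into the TOML
// `script` key of a collection file.
function encodeScript(source: string): string {
  return Buffer.from(source, "utf8").toString("base64");
}

console.log(encodeScript("main();")); // bWFpbigpOw==
```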
TOML Collection
An example TOML collection would look like this:
system = "macos"
[output]
name = "plist_data"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "all_users_plist_files"
# Parses all plist files in /Users/%
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvbWFjb3MvcGxpc3QudHMKZnVuY3Rpb24gZ2V0UGxpc3QocGF0aCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9wbGlzdChwYXRoKTsKICBpZiAoZGF0YSA9PT0gIiIpIHsKICAgIHJldHVybiBudWxsOwogIH0KICBjb25zdCBwbGlzdF9kYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICByZXR1cm4gcGxpc3RfZGF0YTsKfQoKLy8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvc3lzdGVtL291dHB1dC50cwpmdW5jdGlvbiBvdXRwdXRSZXN1bHRzKGRhdGEsIGRhdGFfbmFtZSwgb3V0cHV0KSB7CiAgY29uc3Qgb3V0cHV0X3N0cmluZyA9IEpTT04uc3RyaW5naWZ5KG91dHB1dCk7CiAgY29uc3Qgc3RhdHVzID0gRGVuby5jb3JlLm9wcy5vdXRwdXRfcmVzdWx0cygKICAgIGRhdGEsCiAgICBkYXRhX25hbWUsCiAgICBvdXRwdXRfc3RyaW5nCiAgKTsKICByZXR1cm4gc3RhdHVzOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9maWxlc3lzdGVtL2RpcmVjdG9yeS50cwphc3luYyBmdW5jdGlvbiByZWFkRGlyKHBhdGgpIHsKICBjb25zdCBkYXRhID0gSlNPTi5wYXJzZShhd2FpdCBmcy5yZWFkRGlyKHBhdGgpKTsKICByZXR1cm4gZGF0YTsKfQoKLy8gbWFpbi50cwphc3luYyBmdW5jdGlvbiBtYWluKCkgewogIGNvbnN0IHN0YXJ0X3BhdGggPSAiL1VzZXJzIjsKICBjb25zdCBwbGlzdF9maWxlcyA9IFtdOwogIGF3YWl0IHJlY3Vyc2VfZGlyKHBsaXN0X2ZpbGVzLCBzdGFydF9wYXRoKTsKICByZXR1cm4gcGxpc3RfZmlsZXM7Cn0KYXN5bmMgZnVuY3Rpb24gcmVjdXJzZV9kaXIocGxpc3RfZmlsZXMsIHN0YXJ0X3BhdGgpIHsKICBpZiAocGxpc3RfZmlsZXMubGVuZ3RoID4gMjApIHsKICAgIGNvbnN0IG91dCA9IHsKICAgICAgbmFtZTogImFydGVtaXNfcGxpc3QiLAogICAgICBkaXJlY3Rvcnk6ICIuL3RtcCIsCiAgICAgIGZvcm1hdDogImpzb24iIC8qIEpTT04gKi8sCiAgICAgIGNvbXByZXNzOiBmYWxzZSwKICAgICAgZW5kcG9pbnRfaWQ6ICJhbnl0aGluZy1pLXdhbnQiLAogICAgICBjb2xsZWN0aW9uX2lkOiAxLAogICAgICBvdXRwdXQ6ICJsb2NhbCIgLyogTE9DQUwgKi8KICAgIH07CiAgICBjb25zdCBzdGF0dXMgPSBvdXRwdXRSZXN1bHRzKAogICAgICBKU09OLnN0cmluZ2lmeShwbGlzdF9maWxlcyksCiAgICAgICJhcnRlbWlzX2luZm8iLAogICAgICBvdXQKICAgICk7CiAgICBpZiAoIXN0YXR1cykgewogICAgICBjb25zb2xlLmxvZygiQ291bGQgbm90IG91dHB1dCB0byBsb2NhbCBkaXJlY3RvcnkiKTsKICAgIH0KICAgIHBsaXN0X2ZpbGVzID0gW107CiAgfQogIGZvciAoY29uc3QgZW50cnkgb2YgYXdhaXQgcmVhZERpcihzdGFydF
9wYXRoKSkgewogICAgY29uc3QgcGxpc3RfcGF0aCA9IGAke3N0YXJ0X3BhdGh9LyR7ZW50cnkuZmlsZW5hbWV9YDsKICAgIGlmIChlbnRyeS5pc19maWxlICYmIGVudHJ5LmZpbGVuYW1lLmVuZHNXaXRoKCJwbGlzdCIpKSB7CiAgICAgIGNvbnN0IGRhdGEgPSBnZXRQbGlzdChwbGlzdF9wYXRoKTsKICAgICAgaWYgKGRhdGEgPT09IG51bGwpIHsKICAgICAgICBjb250aW51ZTsKICAgICAgfQogICAgICBjb25zdCBwbGlzdF9pbmZvID0gewogICAgICAgIHBsaXN0X2NvbnRlbnQ6IGRhdGEsCiAgICAgICAgZmlsZTogcGxpc3RfcGF0aAogICAgICB9OwogICAgICBwbGlzdF9maWxlcy5wdXNoKHBsaXN0X2luZm8pOwogICAgICBjb250aW51ZTsKICAgIH0KICAgIGlmIChlbnRyeS5pc19kaXJlY3RvcnkpIHsKICAgICAgYXdhaXQgcmVjdXJzZV9kaXIocGxpc3RfZmlsZXMsIHBsaXN0X3BhdGgpOwogICAgfQogIH0KfQptYWluKCk7Cg=="
Collection Options
- name: Name for script
- script: Base64 encoded bundled script (JavaScript)
Filter Scripts
In addition to creating scripts that call artemis functions, artemis has the
ability to pass the artifact data as an argument to a script! For most scenarios
calling the artemis function is the recommended practice for scripting.
However, the sole exception is the filelisting and rawfilelisting artifacts.
When pulling a filelisting, artemis will recursively walk the filesystem, but
in order to keep memory usage low, every 100,000 files artemis will output the
results. While this keeps memory usage low, it makes it difficult to use via
scripting. If we return 100,000 entries to our script, we cannot continue our
recursive filelisting because we have lost track of where we are in the
filesystem. This is where filter scripts can help.
Instead of calling an artemis function like getRegistry we instead tell
artemis to pass the artifact data as an argument to our script. So, instead of
returning 100,000 files, we pass that data as an argument to our script before
outputting the results.
A normal artemis script would look something like below:
system = "macos"
[output]
name = "plist_data"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "all_users_plist_files"
# Parses all plist files in /Users/%
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvbWFjb3MvcGxpc3QudHMKZnVuY3Rpb24gZ2V0UGxpc3QocGF0aCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9wbGlzdChwYXRoKTsKICBpZiAoZGF0YSA9PT0gIiIpIHsKICAgIHJldHVybiBudWxsOwogIH0KICBjb25zdCBwbGlzdF9kYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICByZXR1cm4gcGxpc3RfZGF0YTsKfQoKLy8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvc3lzdGVtL291dHB1dC50cwpmdW5jdGlvbiBvdXRwdXRSZXN1bHRzKGRhdGEsIGRhdGFfbmFtZSwgb3V0cHV0KSB7CiAgY29uc3Qgb3V0cHV0X3N0cmluZyA9IEpTT04uc3RyaW5naWZ5KG91dHB1dCk7CiAgY29uc3Qgc3RhdHVzID0gRGVuby5jb3JlLm9wcy5vdXRwdXRfcmVzdWx0cygKICAgIGRhdGEsCiAgICBkYXRhX25hbWUsCiAgICBvdXRwdXRfc3RyaW5nCiAgKTsKICByZXR1cm4gc3RhdHVzOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9maWxlc3lzdGVtL2RpcmVjdG9yeS50cwphc3luYyBmdW5jdGlvbiByZWFkRGlyKHBhdGgpIHsKICBjb25zdCBkYXRhID0gSlNPTi5wYXJzZShhd2FpdCBmcy5yZWFkRGlyKHBhdGgpKTsKICByZXR1cm4gZGF0YTsKfQoKLy8gbWFpbi50cwphc3luYyBmdW5jdGlvbiBtYWluKCkgewogIGNvbnN0IHN0YXJ0X3BhdGggPSAiL1VzZXJzIjsKICBjb25zdCBwbGlzdF9maWxlcyA9IFtdOwogIGF3YWl0IHJlY3Vyc2VfZGlyKHBsaXN0X2ZpbGVzLCBzdGFydF9wYXRoKTsKICByZXR1cm4gcGxpc3RfZmlsZXM7Cn0KYXN5bmMgZnVuY3Rpb24gcmVjdXJzZV9kaXIocGxpc3RfZmlsZXMsIHN0YXJ0X3BhdGgpIHsKICBpZiAocGxpc3RfZmlsZXMubGVuZ3RoID4gMjApIHsKICAgIGNvbnN0IG91dCA9IHsKICAgICAgbmFtZTogImFydGVtaXNfcGxpc3QiLAogICAgICBkaXJlY3Rvcnk6ICIuL3RtcCIsCiAgICAgIGZvcm1hdDogImpzb24iIC8qIEpTT04gKi8sCiAgICAgIGNvbXByZXNzOiBmYWxzZSwKICAgICAgZW5kcG9pbnRfaWQ6ICJhbnl0aGluZy1pLXdhbnQiLAogICAgICBjb2xsZWN0aW9uX2lkOiAxLAogICAgICBvdXRwdXQ6ICJsb2NhbCIgLyogTE9DQUwgKi8KICAgIH07CiAgICBjb25zdCBzdGF0dXMgPSBvdXRwdXRSZXN1bHRzKAogICAgICBKU09OLnN0cmluZ2lmeShwbGlzdF9maWxlcyksCiAgICAgICJhcnRlbWlzX2luZm8iLAogICAgICBvdXQKICAgICk7CiAgICBpZiAoIXN0YXR1cykgewogICAgICBjb25zb2xlLmxvZygiQ291bGQgbm90IG91dHB1dCB0byBsb2NhbCBkaXJlY3RvcnkiKTsKICAgIH0KICAgIHBsaXN0X2ZpbGVzID0gW107CiAgfQogIGZvciAoY29uc3QgZW50cnkgb2YgYXdhaXQgcmVhZERpcihzdGFydF
9wYXRoKSkgewogICAgY29uc3QgcGxpc3RfcGF0aCA9IGAke3N0YXJ0X3BhdGh9LyR7ZW50cnkuZmlsZW5hbWV9YDsKICAgIGlmIChlbnRyeS5pc19maWxlICYmIGVudHJ5LmZpbGVuYW1lLmVuZHNXaXRoKCJwbGlzdCIpKSB7CiAgICAgIGNvbnN0IGRhdGEgPSBnZXRQbGlzdChwbGlzdF9wYXRoKTsKICAgICAgaWYgKGRhdGEgPT09IG51bGwpIHsKICAgICAgICBjb250aW51ZTsKICAgICAgfQogICAgICBjb25zdCBwbGlzdF9pbmZvID0gewogICAgICAgIHBsaXN0X2NvbnRlbnQ6IGRhdGEsCiAgICAgICAgZmlsZTogcGxpc3RfcGF0aAogICAgICB9OwogICAgICBwbGlzdF9maWxlcy5wdXNoKHBsaXN0X2luZm8pOwogICAgICBjb250aW51ZTsKICAgIH0KICAgIGlmIChlbnRyeS5pc19kaXJlY3RvcnkpIHsKICAgICAgYXdhaXQgcmVjdXJzZV9kaXIocGxpc3RfZmlsZXMsIHBsaXN0X3BhdGgpOwogICAgfQogIH0KfQptYWluKCk7Cg=="
High level overview of what happens:
TOML file -> decode script -> artemis executes script -> output data
A filter script would look like something below:
system = "macos"
[output]
name = "info_plist_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
filter_name = "apps_info_plists"
# This script will take the files artifact below and filter it to only return Info.plist files
# We could expand this even further by then using the plist parser on the Info.plist path and include that parsed data too
filter_script = "Ly8gbWFpbi50cwpmdW5jdGlvbiBtYWluKCkgewogIGNvbnN0IGFyZ3MgPSBTVEFUSUNfQVJHUzsKICBpZiAoYXJncy5sZW5ndGggPT09IDApIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3QgZGF0YSA9IEpTT04ucGFyc2UoYXJnc1swXSk7CiAgY29uc3QgZmlsdGVyX2ZpbGVzID0gW107CiAgZm9yIChjb25zdCBlbnRyeSBvZiBkYXRhKSB7CiAgICBpZiAoZW50cnkuZmlsZW5hbWUgPT0gIkluZm8ucGxpc3QiKSB7CiAgICAgIGZpbHRlcl9maWxlcy5wdXNoKGVudHJ5KTsKICAgIH0KICB9CiAgcmV0dXJuIGZpbHRlcl9maWxlczsKfQptYWluKCk7Cg=="
[[artifacts]]
artifact_name = "files" # Name of artifact
filter = true
[artifacts.files]
start_path = "/System/Volumes/Data/Applications" # Start of file listing
depth = 100 # How many sub directories to descend
metadata = false # Get executable metadata
md5 = false # MD5 all files
sha1 = false # SHA1 all files
sha256 = false # SHA256 all files
path_regex = "" # Regex for paths
file_regex = "" # Regex for files
The biggest differences are:
- We use a [[artifacts]] list to parse our data
- We base64 encode our script and assign it to filter_script to tell artemis to take the results of the [[artifacts]] list and filter them before outputting the data
- We then set the filter value to true
High level overview of what happens:
TOML file -> walkthrough artifacts list -> artemis collects data -> pass data to filter script -> output data
All entries in a [[artifacts]] list can be sent through a filter script, with the exception of regular artemis scripts. The output of these scripts does not go through filter_script.
The TypeScript code for a filter script would look something like the example below:
import { MacosFileInfo } from "https://raw.githubusercontent.com/puffycid/artemis-api/master/src/macos/files.ts";
/**
* Filters a provided file listing argument to only return Info.plist files from /Applications
* Two arguments are always provided:
* - First is the parsed data serialized into JSON string
* - Second is the artifact name (ex: "amcache")
* @returns Array of files only containing Info.plist
*/
function main() {
// Since this is a filter script our data will be passed as a Serde Value that is a string
const args = Deno.args;
if (args.length === 0) {
return [];
}
// Parse the provided Serde Value (JSON string) as a MacosFileInfo[]
const data: MacosFileInfo[] = JSON.parse(args[0]);
const filter_files: MacosFileInfo[] = [];
for (const entry of data) {
if (entry.filename == "Info.plist") {
filter_files.push(entry);
}
}
return filter_files;
}
main();
The key difference between a regular artemis
script and a filter script is:
const args = Deno.args;
if (args.length === 0) {
return [];
}
// Parse the provided Serde Value (JSON string) as a MacosFileInfo[]
const data: MacosFileInfo[] = JSON.parse(args[0]);
Here we are taking the first argument provided to our script and parsing it as a JSON MacosFileInfo object array. As stated above, artemis will pass the results of each [[artifacts]] entry to our script using serde to serialize the data as a JSON formatted string.
We then parse and filter the data in our script:
// Parse the provided Serde Value (JSON string) as a MacosFileInfo[]
const data: MacosFileInfo[] = JSON.parse(args[0]);
const filter_files: MacosFileInfo[] = [];
for (const entry of data) {
if (entry.filename == "Info.plist") {
filter_files.push(entry);
}
}
Finally, we take our filtered output and return it back to artemis
return filter_files;
So our initial data provided to our filter script gets filtered and returned. In this example, our 100,000-entry file listing gets filtered to only return entries with the filename Info.plist.
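Decoded, the filter above boils down to a small pure function. A minimal standalone sketch of the same logic (the MacosFileInfo shape is trimmed to the fields used here, and the sample listing is made up for illustration):

```typescript
// Trimmed stand-in for the MacosFileInfo entries artemis hands to a filter
// script; only the fields used below are included.
interface FileEntry {
  filename: string;
  full_path: string;
}

// Filter a serialized file listing down to Info.plist entries, mirroring the
// filter script above. Input is the JSON string artemis would provide as the
// first script argument.
function filterInfoPlists(serialized: string): FileEntry[] {
  const data: FileEntry[] = JSON.parse(serialized);
  const filter_files: FileEntry[] = [];
  for (const entry of data) {
    if (entry.filename === "Info.plist") {
      filter_files.push(entry);
    }
  }
  return filter_files;
}

// Three entries in, only the two Info.plist entries come back.
const sample = JSON.stringify([
  { filename: "Info.plist", full_path: "/Applications/Safari.app/Contents/Info.plist" },
  { filename: "Safari", full_path: "/Applications/Safari.app/Contents/MacOS/Safari" },
  { filename: "Info.plist", full_path: "/Applications/Notes.app/Contents/Info.plist" },
]);
console.log(filterInfoPlists(sample).length); // 2
```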
Limitations
It is important to understand that the JavaScript runtime for artemis is not like typical JavaScript runtimes such as nodejs, deno, or bun. Those runtimes are primarily designed to create web apps.
Therefore, tutorials or example scripts created for other runtimes will likely not work with artemis. For example, the JavaScript function console.table() does not exist in artemis; however, the functions console.log() and console.error() do exist.
The JavaScript runtime for artemis is designed specifically to assist with scripting for IR and forensic investigations.
There are currently some additional limitations to scripting with artemis:
- All scripts executed through artemis must be in JavaScript. You cannot execute TypeScript scripts directly; you must compile and bundle them into one (1) JavaScript file.
- The JavaScript must be in CommonJS (cjs) format. ECMAScript (ES) module scripts are not supported.
The example code below uses esbuild to bundle the main.ts file to JavaScript in CJS format via deno run build.ts:
import * as esbuild from "https://deno.land/x/esbuild@v0.15.10/mod.js";
import { denoPlugin } from "https://deno.land/x/esbuild_deno_loader@0.6.0/mod.ts";
async function main() {
const _result = await esbuild.build({
plugins: [denoPlugin()],
entryPoints: ["./main.ts"],
outfile: "main.js",
bundle: true,
format: "cjs",
});
esbuild.stop();
}
main();
Library Usage
artemis-core is a very simple Rust library. It currently only exposes two (2) functions:
- parse_toml_file(path: &str) - Parse a TOML collection file at the provided path
- parse_toml_data(data: &[u8]) - Parse bytes associated with a TOML collection
Both functions return nothing on success (artemis-core handles data output) or an error.
Logging
artemis-core includes a logging feature that tracks internal issues it may encounter when executing. If you import artemis-core into your own project, you may register your own logger; however, that will disable the builtin logger in artemis-core.
Execution
Windows TOML collection to parse artifacts commonly associated with execution
system = "windows"
[output]
name = "execution_collection"
directory = "./tmp"
format = "json"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "prefetch"
[artifacts.prefetch]
[[artifacts]]
artifact_name = "amcache"
[artifacts.amcache]
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
[[artifacts]]
artifact_name = "userassist"
[artifacts.userassist]
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "muicache"
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9yZWdpc3RyeS50cwpmdW5jdGlvbiBnZXRSZWdpc3RyeShwYXRoKSB7CiAgY29uc3QgZGF0YSA9IERlbm8uY29yZS5vcHMuZ2V0X3JlZ2lzdHJ5KHBhdGgpOwogIGNvbnN0IHJlc3VsdHMgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiByZXN1bHRzOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9lbnZpcm9ubWVudC9lbnYudHMKZnVuY3Rpb24gZ2V0RW52VmFsdWUoa2V5KSB7CiAgY29uc3QgZGF0YSA9IGVudi5lbnZpcm9ubWVudFZhbHVlKGtleSk7CiAgcmV0dXJuIGRhdGE7Cn0KCi8vIGh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9wdWZmeWNpZC9hcnRlbWlzLWFwaS9tYXN0ZXIvc3JjL2ZpbGVzeXN0ZW0vZGlyZWN0b3J5LnRzCmFzeW5jIGZ1bmN0aW9uIHJlYWREaXIocGF0aCkgewogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKGF3YWl0IGZzLnJlYWREaXIocGF0aCkpOwogIHJldHVybiBkYXRhOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9maWxlc3lzdGVtL2ZpbGVzLnRzCmZ1bmN0aW9uIHN0YXQocGF0aCkgewogIGNvbnN0IGRhdGEgPSBmcy5zdGF0KHBhdGgpOwogIGNvbnN0IHZhbHVlID0gSlNPTi5wYXJzZShkYXRhKTsKICByZXR1cm4gdmFsdWU7Cn0KCi8vIG1haW4udHMKYXN5bmMgZnVuY3Rpb24gbWFpbigpIHsKICBjb25zdCBkcml2ZSA9IGdldEVudlZhbHVlKCJTeXN0ZW1Ecml2ZSIpOwogIGlmIChkcml2ZSA9PT0gIiIpIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3QgbXVpX2FycmF5ID0gW107CiAgY29uc3QgdXNlcnMgPSBgJHtkcml2ZX1cXFVzZXJzYDsKICBmb3IgKGNvbnN0IGVudHJ5IG9mIGF3YWl0IHJlYWREaXIodXNlcnMpKSB7CiAgICB0cnkgewogICAgICBjb25zdCBwYXRoID0gYCR7dXNlcnN9XFwke2VudHJ5LmZpbGVuYW1lfVxcQXBwRGF0YVxcTG9jYWxcXE1pY3Jvc29mdFxcV2luZG93c1xcVXNyQ2xhc3MuZGF0YDsKICAgICAgY29uc3Qgc3RhdHVzID0gc3RhdChwYXRoKTsKICAgICAgaWYgKCFzdGF0dXMuaXNfZmlsZSkgewogICAgICAgIGNvbnRpbnVlOwogICAgICB9CiAgICAgIGNvbnN0IHJlZ19yZXN1bHRzID0gZ2V0UmVnaXN0cnkocGF0aCk7CiAgICAgIGZvciAoY29uc3QgcmVnX2VudHJ5IG9mIHJlZ19yZXN1bHRzKSB7CiAgICAgICAgaWYgKHJlZ19lbnRyeS5wYXRoLmluY2x1ZGVzKAogICAgICAgICAgIkxvY2FsIFNldHRpbmdzXFxTb2Z0d2FyZVxcTWljcm9zb2Z0XFxXaW5kb3dzXFxTaGVsbFxcTXVpQ2FjaGUiCiAgICAgICAgKSkgewogICAgICAgICAgZm9yIChjb25zdCB2YWx1ZSBvZiByZWdfZW50cnkudmFsdWVzKSB7CiAgICAgICAgICAgIGlmICh2YW
x1ZS5kYXRhX3R5cGUgIT0gIlJFR19TWiIpIHsKICAgICAgICAgICAgICBjb250aW51ZTsKICAgICAgICAgICAgfQogICAgICAgICAgICBjb25zdCBtdWljYWNoZSA9IHsKICAgICAgICAgICAgICBhcHBsaWNhdGlvbjogdmFsdWUudmFsdWUsCiAgICAgICAgICAgICAgZGVzY3JpcHRpb246IHZhbHVlLmRhdGEKICAgICAgICAgICAgfTsKICAgICAgICAgICAgbXVpX2FycmF5LnB1c2gobXVpY2FjaGUpOwogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfQogICAgfSBjYXRjaCAoX2UpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgfQogIHJldHVybiBtdWlfYXJyYXk7Cn0KbWFpbigpOwo="
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "eventlogs_4688"
script = "Ly8gLi4vLi4vYXJ0ZW1pcy1hcGkvc3JjL3dpbmRvd3MvZXZlbnRsb2dzLnRzCmZ1bmN0aW9uIGdldF9ldmVudGxvZ3MocGF0aCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9ldmVudGxvZ3MocGF0aCk7CiAgY29uc3QgbG9nX2FycmF5ID0gSlNPTi5wYXJzZShkYXRhKTsKICByZXR1cm4gbG9nX2FycmF5Owp9CgovLyAuLi8uLi9hcnRlbWlzLWFwaS9tb2QudHMKZnVuY3Rpb24gZ2V0RXZlbnRMb2dzKHBhdGgpIHsKICByZXR1cm4gZ2V0X2V2ZW50bG9ncyhwYXRoKTsKfQoKLy8gbWFpbi50cwpmdW5jdGlvbiBtYWluKCkgewogIGNvbnN0IHBhdGggPSAiQzpcXFdpbmRvd3NcXFN5c3RlbTMyXFx3aW5ldnRcXExvZ3NcXFNlY3VyaXR5LmV2dHgiOwogIGNvbnN0IHJlY29yZHMgPSBnZXRFdmVudExvZ3MocGF0aCk7CiAgY29uc3QgcHJvY2Vzc2VzID0gW107CiAgZm9yIChjb25zdCByZWNvcmQgb2YgcmVjb3JkcykgewogICAgaWYgKHJlY29yZC5kYXRhWyJFdmVudCJdWyJTeXN0ZW0iXVsiRXZlbnRJRCJdICE9IDQ2ODgpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgICBwcm9jZXNzZXMucHVzaChyZWNvcmQpOwogIH0KICByZXR1cm4gcHJvY2Vzc2VzOwp9Cm1haW4oKTsK"
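The eventlogs_4688 script walks Security.evtx records and keeps only process-creation events (EID 4688). The core loop can be sketched as a standalone function (the record shape is trimmed for illustration):

```typescript
// Trimmed shape of a parsed event log record; real records returned by the
// runtime carry the full Event XML converted to JSON.
interface EventRecord {
  data: { Event: { System: { EventID: number } } };
}

// Keep only process-creation (EID 4688) records, mirroring main() in the
// decoded script above.
function processCreationEvents(records: EventRecord[]): EventRecord[] {
  return records.filter(
    (record) => record.data.Event.System.EventID === 4688,
  );
}

const eventSample: EventRecord[] = [
  { data: { Event: { System: { EventID: 4688 } } } }, // process creation: kept
  { data: { Event: { System: { EventID: 4624 } } } }, // logon: dropped
];
console.log(processCreationEvents(eventSample).length); // 1
```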
Triage
Windows TOML collection focusing on quickly collecting data related to a Windows alert.
system = "windows"
[output]
name = "triage_collection"
directory = "./tmp"
format = "json"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "processes"
[artifacts.processes]
md5 = true
sha1 = false
sha256 = false
metadata = true
[[artifacts]]
artifact_name = "prefetch"
[artifacts.prefetch]
[[artifacts]]
artifact_name = "userassist"
[artifacts.userassist]
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "office_mru"
# Pulls back recently opened Office documents for all users from NTUSER.DAT files
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9yZWdpc3RyeS50cwpmdW5jdGlvbiBnZXRSZWdpc3RyeShwYXRoKSB7CiAgY29uc3QgZGF0YSA9IERlbm8uY29yZS5vcHMuZ2V0X3JlZ2lzdHJ5KHBhdGgpOwogIGNvbnN0IHJlc3VsdHMgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiByZXN1bHRzOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9lbnZpcm9ubWVudC9lbnYudHMKZnVuY3Rpb24gZ2V0RW52VmFsdWUoa2V5KSB7CiAgY29uc3QgZGF0YSA9IGVudi5lbnZpcm9ubWVudFZhbHVlKGtleSk7CiAgcmV0dXJuIGRhdGE7Cn0KCi8vIGh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9wdWZmeWNpZC9hcnRlbWlzLWFwaS9tYXN0ZXIvc3JjL2ZpbGVzeXN0ZW0vZGlyZWN0b3J5LnRzCmFzeW5jIGZ1bmN0aW9uIHJlYWREaXIocGF0aCkgewogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKGF3YWl0IGZzLnJlYWREaXIocGF0aCkpOwogIHJldHVybiBkYXRhOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9maWxlc3lzdGVtL2ZpbGVzLnRzCmZ1bmN0aW9uIHN0YXQocGF0aCkgewogIGNvbnN0IGRhdGEgPSBmcy5zdGF0KHBhdGgpOwogIGNvbnN0IHZhbHVlID0gSlNPTi5wYXJzZShkYXRhKTsKICByZXR1cm4gdmFsdWU7Cn0KCi8vIG1haW4udHMKYXN5bmMgZnVuY3Rpb24gbWFpbigpIHsKICBjb25zdCBkcml2ZSA9IGdldEVudlZhbHVlKCJTeXN0ZW1Ecml2ZSIpOwogIGlmIChkcml2ZSA9PT0gIiIpIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3Qgb2ZmaWNlX2FycmF5ID0gW107CiAgY29uc3QgdXNlcnMgPSBgJHtkcml2ZX1cXFVzZXJzYDsKICBmb3IgKGNvbnN0IGVudHJ5IG9mIGF3YWl0IHJlYWREaXIodXNlcnMpKSB7CiAgICB0cnkgewogICAgICBjb25zdCBwYXRoID0gYCR7dXNlcnN9XFwke2VudHJ5LmZpbGVuYW1lfVxcTlRVU0VSLkRBVGA7CiAgICAgIGNvbnN0IHN0YXR1cyA9IHN0YXQocGF0aCk7CiAgICAgIGlmICghc3RhdHVzLmlzX2ZpbGUpIHsKICAgICAgICBjb250aW51ZTsKICAgICAgfQogICAgICBjb25zdCByZWdfcmVzdWx0cyA9IGdldFJlZ2lzdHJ5KHBhdGgpOwogICAgICBmb3IgKGNvbnN0IHJlZ19lbnRyeSBvZiByZWdfcmVzdWx0cykgewogICAgICAgIGlmICghcmVnX2VudHJ5LnBhdGgubWF0Y2goCiAgICAgICAgICAvTWljcm9zb2Z0XFxPZmZpY2VcXDEoNHw1fDYpXC4wXFwuKlxcKEZpbGUgTVJVfCBVc2VyIE1SVVxcLipcXEZpbGUgTVJVKS8KICAgICAgICApKSB7CiAgICAgICAgICBjb250aW51ZTsKICAgICAgICB9CiAgICAgICAgZm9yIChjb25zdCB2YWx1ZSBvZiByZWdfZW50cnkudmFsdWVzKSB7CiAgICAgICAgICBpZiAoIXZhbH
VlLnZhbHVlLmluY2x1ZGVzKCJJdGVtICIpKSB7CiAgICAgICAgICAgIGNvbnRpbnVlOwogICAgICAgICAgfQogICAgICAgICAgY29uc3Qgd2luZG93c19uYW5vID0gMWU3OwogICAgICAgICAgY29uc3Qgc2Vjb25kc190b191bml4ID0gMTE2NDQ0NzM2MDA7CiAgICAgICAgICBjb25zdCBmaWxldGltZSA9IHBhcnNlSW50KAogICAgICAgICAgICB2YWx1ZS5kYXRhLnNwbGl0KCJbVCIpWzFdLnNwbGl0KCJdIilbMF0sCiAgICAgICAgICAgIDE2CiAgICAgICAgICApOwogICAgICAgICAgY29uc3QgdW5peGVwb2NoID0gZmlsZXRpbWUgLyB3aW5kb3dzX25hbm8gLSBzZWNvbmRzX3RvX3VuaXg7CiAgICAgICAgICBjb25zdCBtcnUgPSB7CiAgICAgICAgICAgIGZpbGVfcGF0aDogdmFsdWUuZGF0YS5zcGxpdCgiKiIpWzFdLAogICAgICAgICAgICByZWdfcGF0aDogcmVnX2VudHJ5LnBhdGgsCiAgICAgICAgICAgIGxhc3Rfb3BlbmVkOiB1bml4ZXBvY2gsCiAgICAgICAgICAgIGxhc3Rfb3BlbmVkX2ZpbGV0aW1lOiBmaWxldGltZSwKICAgICAgICAgICAgcmVnX2ZpbGVfcGF0aDogcGF0aAogICAgICAgICAgfTsKICAgICAgICAgIG9mZmljZV9hcnJheS5wdXNoKG1ydSk7CiAgICAgICAgfQogICAgICB9CiAgICB9IGNhdGNoIChfZSkgewogICAgICBjb250aW51ZTsKICAgIH0KICB9CiAgcmV0dXJuIG9mZmljZV9hcnJheTsKfQptYWluKCk7Cg=="
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "recent_files"
# Parses all recently accessed files (shortcuts/lnk files) for all users
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9zaG9ydGN1dHMudHMKZnVuY3Rpb24gZ2V0TG5rRmlsZShwYXRoKSB7CiAgY29uc3QgZGF0YSA9IERlbm8uY29yZS5vcHMuZ2V0X2xua19maWxlKHBhdGgpOwogIGNvbnN0IHJlc3VsdHMgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiByZXN1bHRzOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9lbnZpcm9ubWVudC9lbnYudHMKZnVuY3Rpb24gZ2V0RW52VmFsdWUoa2V5KSB7CiAgY29uc3QgZGF0YSA9IGVudi5lbnZpcm9ubWVudFZhbHVlKGtleSk7CiAgcmV0dXJuIGRhdGE7Cn0KCi8vIGh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9wdWZmeWNpZC9hcnRlbWlzLWFwaS9tYXN0ZXIvc3JjL2ZpbGVzeXN0ZW0vZGlyZWN0b3J5LnRzCmFzeW5jIGZ1bmN0aW9uIHJlYWREaXIocGF0aCkgewogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKGF3YWl0IGZzLnJlYWREaXIocGF0aCkpOwogIHJldHVybiBkYXRhOwp9CgovLyBtYWluLnRzCmFzeW5jIGZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgZHJpdmUgPSBnZXRFbnZWYWx1ZSgiU3lzdGVtRHJpdmUiKTsKICBpZiAoZHJpdmUgPT09ICIiKSB7CiAgICByZXR1cm4gW107CiAgfQogIGNvbnN0IHVzZXJzID0gYCR7ZHJpdmV9XFxVc2Vyc2A7CiAgY29uc3QgcmVjZW50X2ZpbGVzID0gW107CiAgZm9yIChjb25zdCBlbnRyeSBvZiBhd2FpdCByZWFkRGlyKHVzZXJzKSkgewogICAgdHJ5IHsKICAgICAgY29uc3QgcGF0aCA9IGAke3VzZXJzfVxcJHtlbnRyeS5maWxlbmFtZX1cXEFwcERhdGFcXFJvYW1pbmdcXE1pY3Jvc29mdFxcV2luZG93c1xcUmVjZW50YDsKICAgICAgZm9yIChjb25zdCBlbnRyeTIgb2YgYXdhaXQgcmVhZERpcihwYXRoKSkgewogICAgICAgIGlmICghZW50cnkyLmZpbGVuYW1lLmVuZHNXaXRoKCJsbmsiKSkgewogICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGNvbnN0IGxua19maWxlID0gYCR7cGF0aH1cXCR7ZW50cnkyLmZpbGVuYW1lfWA7CiAgICAgICAgY29uc3QgbG5rID0gZ2V0TG5rRmlsZShsbmtfZmlsZSk7CiAgICAgICAgcmVjZW50X2ZpbGVzLnB1c2gobG5rKTsKICAgICAgfQogICAgfSBjYXRjaCAoX2Vycm9yKSB7CiAgICAgIGNvbnRpbnVlOwogICAgfQogIH0KICByZXR1cm4gcmVjZW50X2ZpbGVzOwp9Cm1haW4oKTsK"
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "logons_7_days"
# Pulls back all logons within the past seven (7) days (logoffs are not included)
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9ldmVudGxvZ3MudHMKZnVuY3Rpb24gZ2V0RXZlbnRsb2dzKHBhdGgpIHsKICBjb25zdCByZXN1bHRzID0gRGVuby5jb3JlLm9wcy5nZXRfZXZlbnRsb2dzKHBhdGgpOwogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKHJlc3VsdHMpOwogIHJldHVybiBkYXRhOwp9CgovLyBtYWluLnRzCmZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgcGF0aCA9ICJDOlxcV2luZG93c1xcU3lzdGVtMzJcXHdpbmV2dFxcTG9nc1xcU2VjdXJpdHkuZXZ0eCI7CiAgY29uc3QgcmVjb3JkcyA9IGdldEV2ZW50bG9ncyhwYXRoKTsKICBjb25zdCBsb2dvbnMgPSBbXTsKICBjb25zdCB0aW1lX25vdyA9IG5ldyBEYXRlKCk7CiAgY29uc3QgbWlsbGlzZWNvbmRzID0gdGltZV9ub3cuZ2V0VGltZSgpOwogIGNvbnN0IG5hbm9zZWNvbmRzID0gbWlsbGlzZWNvbmRzICogMWU2OwogIGNvbnN0IHNldmVuX2RheXMgPSA2MDQ4ZTExOwogIGNvbnN0IHN0YXJ0X2xvZ29ucyA9IG5hbm9zZWNvbmRzIC0gc2V2ZW5fZGF5czsKICBmb3IgKGNvbnN0IHJlY29yZCBvZiByZWNvcmRzKSB7CiAgICBpZiAocmVjb3JkLmRhdGFbIkV2ZW50Il1bIlN5c3RlbSJdWyJFdmVudElEIl0gIT0gNDYyNCAmJiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiU3lzdGVtIl1bIkV2ZW50SUQiXVsiI3RleHQiXSAhPSA0NjI0KSB7CiAgICAgIGNvbnRpbnVlOwogICAgfQogICAgaWYgKHJlY29yZC50aW1lc3RhbXAgPCBzdGFydF9sb2dvbnMpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgICBjb25zdCBlbnRyeSA9IHsKICAgICAgdGltZXN0YW1wOiByZWNvcmQudGltZXN0YW1wLAogICAgICB0YXJnZXRfc2lkOiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIlRhcmdldFVzZXJTaWQiXSwKICAgICAgdGFyZ2V0X3VzZXJuYW1lOiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIlRhcmdldFVzZXJOYW1lIl0sCiAgICAgIHRhcmdldF9kb21haW46IHJlY29yZC5kYXRhWyJFdmVudCJdWyJFdmVudERhdGEiXVsiVGFyZ2V0RG9tYWluTmFtZSJdLAogICAgICB0eXBlOiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIkxvZ29uVHlwZSJdLAogICAgICBob3N0bmFtZTogcmVjb3JkLmRhdGFbIkV2ZW50Il1bIkV2ZW50RGF0YSJdWyJXb3Jrc3RhdGlvbk5hbWUiXSwKICAgICAgaXBfYWRkcmVzczogcmVjb3JkLmRhdGFbIkV2ZW50Il1bIkV2ZW50RGF0YSJdWyJJcEFkZHJlc3MiXSwKICAgICAgcHJvY2Vzc19uYW1lOiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIlByb2Nlc3NOYW1lIl0sCiAgICAgIHJhdzogcmVjb3JkCiAgICB9OwogICAgbG9nb25zLnB1c2goZW50cnkpOwogIH0KICByZXR1cm4gbG9nb25zOwp9Cm1haW4oKTsK"
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "powershell"
# Parses PowerShell logs looking for EIDs 400, 4104, 4103, and 800
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9ldmVudGxvZ3MudHMKZnVuY3Rpb24gZ2V0RXZlbnRsb2dzKHBhdGgpIHsKICBjb25zdCByZXN1bHRzID0gRGVuby5jb3JlLm9wcy5nZXRfZXZlbnRsb2dzKHBhdGgpOwogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKHJlc3VsdHMpOwogIHJldHVybiBkYXRhOwp9CgovLyBtYWluLnRzCmZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgcGF0aHMgPSBbCiAgICAiQzpcXFdpbmRvd3NcXFN5c3RlbTMyXFx3aW5ldnRcXExvZ3NcXFdpbmRvd3MgUG93ZXJTaGVsbC5ldnR4IiwKICAgICJDOlxcV2luZG93c1xcU3lzdGVtMzJcXHdpbmV2dFxcTG9nc1xcTWljcm9zb2Z0LVdpbmRvd3MtUG93ZXJTaGVsbCU0T3BlcmF0aW9uYWwuZXZ0eCIKICBdOwogIGNvbnN0IGJsb2NrcyA9IFtdOwogIGNvbnN0IHBvd2VyID0gW107CiAgY29uc3QgZWlkcyA9IFs0MDAsIDgwMCwgNDEwNCwgNDEwM107CiAgZm9yIChjb25zdCBwYXRoIG9mIHBhdGhzKSB7CiAgICBjb25zdCByZWNvcmRzID0gZ2V0RXZlbnRsb2dzKHBhdGgpOwogICAgZm9yIChjb25zdCByZWNvcmQgb2YgcmVjb3JkcykgewogICAgICBpZiAoIWVpZHMuaW5jbHVkZXMocmVjb3JkLmRhdGFbIkV2ZW50Il1bIlN5c3RlbSJdWyJFdmVudElEIl0pICYmICFlaWRzLmluY2x1ZGVzKHJlY29yZC5kYXRhWyJFdmVudCJdWyJTeXN0ZW0iXVsiRXZlbnRJRCJdWyIjdGV4dCJdKSkgewogICAgICAgIGNvbnRpbnVlOwogICAgICB9CiAgICAgIGlmIChwYXRoLmluY2x1ZGVzKCJXaW5kb3dzIFBvd2VyU2hlbGwuZXZ0eCIpKSB7CiAgICAgICAgY29uc3QgcG93ZXJzaGVsbCA9IHsKICAgICAgICAgIHRpbWVzdGFtcDogcmVjb3JkLnRpbWVzdGFtcCwKICAgICAgICAgIGRhdGE6IHJlY29yZC5kYXRhWyJFdmVudCJdWyJFdmVudERhdGEiXVsiRGF0YSJdWyIjdGV4dCJdLAogICAgICAgICAgcmF3OiByZWNvcmQKICAgICAgICB9OwogICAgICAgIHBvd2VyLnB1c2gocG93ZXJzaGVsbCk7CiAgICAgIH0gZWxzZSB7CiAgICAgICAgY29uc3QgYmxvY2sgPSB7CiAgICAgICAgICB0aW1lc3RhbXA6IHJlY29yZC50aW1lc3RhbXAsCiAgICAgICAgICBwYXRoOiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIlBhdGgiXSwKICAgICAgICAgIHRleHQ6IHJlY29yZC5kYXRhWyJFdmVudCJdWyJFdmVudERhdGEiXVsiU2NyaXB0QmxvY2tUZXh0Il0sCiAgICAgICAgICBpZDogcmVjb3JkLmRhdGFbIkV2ZW50Il1bIkV2ZW50RGF0YSJdWyJTY3JpcHRCbG9ja0lkIl0sCiAgICAgICAgICByYXc6IHJlY29yZAogICAgICAgIH07CiAgICAgICAgYmxvY2tzLnB1c2goYmxvY2spOwogICAgICB9CiAgICB9CiAgfQogIGNvbnN0IGxvZ3MgPSB7CiAgICBzY3JpcHRibG9ja3M6IGJsb2NrcywKICAgIHBvd2Vyc2hlbGw6IHBvd2VyCiAgfTsKICByZXR1cm4gbG9nczsKfQptYWluKCk7Cg
=="
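The office_mru script above converts Registry FILETIME values to Unix epoch seconds. A minimal sketch of that conversion, using the same constants as the decoded script (1e7 FILETIME ticks per second and an 11644473600-second offset between the 1601 and 1970 epochs):

```typescript
// FILETIME counts 100-nanosecond ticks since 1601-01-01; Unix epoch seconds
// start at 1970-01-01. The two constants below match the decoded script.
const WINDOWS_TICKS_PER_SECOND = 1e7;
const EPOCH_DIFFERENCE_SECONDS = 11644473600;

function filetimeToUnixEpoch(filetime: number): number {
  return filetime / WINDOWS_TICKS_PER_SECOND - EPOCH_DIFFERENCE_SECONDS;
}

// 116444736000000000 is the FILETIME value for 1970-01-01T00:00:00Z.
console.log(filetimeToUnixEpoch(116444736000000000)); // 0
```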
macOS TOML collection focusing on quickly collecting data related to a macOS alert.
system = "macos"
[output]
name = "triage_collection"
directory = "./tmp"
format = "json"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
filter_name = "unifiedlogs_fsevents_filter"
# Filter for all logs and FsEvents that contain ".dmg" or "Downloads"
filter_script="ZnVuY3Rpb24gZmlsdGVyTG9ncyhkYXRhKSB7CiAgY29uc3QgbG9ncyA9IFtdOwogIGNvbnN0IGxvZ0RhdGEgPSBKU09OLnBhcnNlKGRhdGEpOwogIGZvciAobGV0IGVudHJ5ID0gMDsgZW50cnkgPCBsb2dEYXRhLmxlbmd0aDsgZW50cnkrKykgewogICAgaWYgKCFsb2dEYXRhW2VudHJ5XS5tZXNzYWdlLmluY2x1ZGVzKCJEb3dubG9hZHMiKSAmJiAhbG9nRGF0YVtlbnRyeV0ubWVzc2FnZS5pbmNsdWRlcygiLmRtZyIpKSB7CiAgICAgIGNvbnRpbnVlOwogICAgfQogICAgbG9ncy5wdXNoKGxvZ0RhdGFbZW50cnldKTsKICB9CiAgcmV0dXJuIGxvZ3M7Cn0KZnVuY3Rpb24gZmlsdGVyRXZlbnRzKGRhdGEpIHsKICBjb25zdCBldmVudHMgPSBbXTsKICBjb25zdCBldmVudHNEYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICBmb3IgKGNvbnN0IGVudHJ5IG9mIGV2ZW50c0RhdGEpIHsKICAgIGlmICghZW50cnkucGF0aC5pbmNsdWRlcygiLmRtZyIpICYmICFlbnRyeS5wYXRoLmluY2x1ZGVzKCJEb3dubG9hZHMiKSkgewogICAgICBjb250aW51ZTsKICAgIH0KICAgIGV2ZW50cy5wdXNoKGVudHJ5KTsKICB9CiAgcmV0dXJuIGV2ZW50czsKfQoKZnVuY3Rpb24gbWFpbigpIHsKICBjb25zdCBhcmdzOiBzdHJpbmdbXSA9IFNUQVRJQ19BUkdTOwogIGlmIChhcmdzLmxlbmd0aCAhPSAyKSB7CiAgICByZXR1cm4gIm1pc3NpbmcgYXJncyIKICB9CiAgaWYgKGFyZ3NbMV0gPT09ICJ1bmlmaWVkbG9ncyIpIHsKICAgIHJldHVybiBmaWx0ZXJMb2dzKGFyZ3NbMF0pOwogIH0KICBpZiAoYXJnc1sxXSA9PT0gImZzZXZlbnRzZCIpIHsKICAgIHJldHVybiBmaWx0ZXJFdmVudHMoYXJnc1swXSk7CiAgfQoKICByZXR1cm4gSlNPTi5wYXJzZShhcmdzWzBdKTsKfQptYWluKCk7Cg=="
[[artifacts]]
artifact_name = "processes"
[artifacts.processes]
md5 = true
sha1 = false
sha256 = false
metadata = true
[[artifacts]]
artifact_name = "unifiedlogs"
filter = true
[artifacts.unifiedlogs]
sources = ["Persist"]
[[artifacts]]
artifact_name = "fseventsd"
filter = true
[[artifacts]]
artifact_name = "chromium-history"
[[artifacts]]
artifact_name = "chromium-downloads"
[[artifacts]]
artifact_name = "firefox-history"
[[artifacts]]
artifact_name = "firefox-downloads"
[[artifacts]]
artifact_name = "safari-history"
[[artifacts]]
artifact_name = "safari-downloads"
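Since the same filter_script runs against every [[artifacts]] entry marked with filter = true, the script above inspects its second argument (the artifact name) to decide which filter to apply. A trimmed standalone sketch of that dispatch (the record shapes and sample data are assumptions for illustration):

```typescript
// Second argument is the artifact name; first is the serialized results.
// Unknown artifacts pass through unfiltered, as in the script above.
function routeFilter(serialized: string, artifact: string): unknown {
  if (artifact === "unifiedlogs") {
    return filterByMessage(serialized);
  }
  if (artifact === "fseventsd") {
    return filterByPath(serialized);
  }
  return JSON.parse(serialized);
}

// Keep log entries whose message mentions "Downloads" or ".dmg".
function filterByMessage(serialized: string): { message: string }[] {
  return JSON.parse(serialized).filter(
    (entry: { message: string }) =>
      entry.message.includes("Downloads") || entry.message.includes(".dmg"),
  );
}

// Keep filesystem events whose path mentions "Downloads" or ".dmg".
function filterByPath(serialized: string): { path: string }[] {
  return JSON.parse(serialized).filter(
    (entry: { path: string }) =>
      entry.path.includes("Downloads") || entry.path.includes(".dmg"),
  );
}

const logs = JSON.stringify([
  { message: "mounted /Users/user/Downloads/tool.dmg" },
  { message: "kernel boot" },
]);
console.log((routeFilter(logs, "unifiedlogs") as unknown[]).length); // 1
```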
Live Response
Windows TOML collection focusing on collecting data to help investigate a Windows incident.
system = "windows"
[output]
name = "windows_collection"
directory = "./tmp"
format = "jsonl"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "prefetch"
[artifacts.prefetch]
[[artifacts]]
artifact_name = "processes"
[artifacts.processes]
md5 = true
sha1 = false
sha256 = false
metadata = true
[[artifacts]]
artifact_name = "systeminfo"
[[artifacts]]
artifact_name = "chromium-history"
[[artifacts]]
artifact_name = "chromium-downloads"
[[artifacts]]
artifact_name = "firefox-history"
[[artifacts]]
artifact_name = "firefox-downloads"
[[artifacts]]
artifact_name = "amcache"
[artifacts.amcache]
[[artifacts]]
artifact_name = "bits"
[artifacts.bits]
carve = true
[[artifacts]]
artifact_name = "eventlogs"
[artifacts.eventlogs]
[[artifacts]]
artifact_name = "rawfiles"
[artifacts.rawfiles]
drive_letter = 'C'
start_path = "C:\\"
depth = 40
recover_indx = true
md5 = true
sha1 = false
sha256 = false
metadata = true
[[artifacts]]
artifact_name = "registry" # Parses the whole Registry file
[artifacts.registry]
user_hives = true # All NTUSER.DAT and UsrClass.dat
system_hives = true # SYSTEM, SOFTWARE, SAM, SECURITY
[[artifacts]]
artifact_name = "shellbags"
[artifacts.shellbags]
resolve_guids = true
[[artifacts]]
artifact_name = "shimcache"
[artifacts.shimcache]
[[artifacts]]
artifact_name = "srum"
[artifacts.srum]
[[artifacts]]
artifact_name = "userassist"
[artifacts.userassist]
[[artifacts]]
artifact_name = "users"
[artifacts.users]
[[artifacts]]
artifact_name = "usnjrnl"
[artifacts.usnjrnl]
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "recent_files"
# Parses all recently accessed files (shortcuts/lnk files) for all users
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9zaG9ydGN1dHMudHMKZnVuY3Rpb24gZ2V0TG5rRmlsZShwYXRoKSB7CiAgY29uc3QgZGF0YSA9IERlbm8uY29yZS5vcHMuZ2V0X2xua19maWxlKHBhdGgpOwogIGNvbnN0IHJlc3VsdHMgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiByZXN1bHRzOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9lbnZpcm9ubWVudC9lbnYudHMKZnVuY3Rpb24gZ2V0RW52VmFsdWUoa2V5KSB7CiAgY29uc3QgZGF0YSA9IGVudi5lbnZpcm9ubWVudFZhbHVlKGtleSk7CiAgcmV0dXJuIGRhdGE7Cn0KCi8vIGh0dHBzOi8vcmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbS9wdWZmeWNpZC9hcnRlbWlzLWFwaS9tYXN0ZXIvc3JjL2ZpbGVzeXN0ZW0vZGlyZWN0b3J5LnRzCmFzeW5jIGZ1bmN0aW9uIHJlYWREaXIocGF0aCkgewogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKGF3YWl0IGZzLnJlYWREaXIocGF0aCkpOwogIHJldHVybiBkYXRhOwp9CgovLyBtYWluLnRzCmFzeW5jIGZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgZHJpdmUgPSBnZXRFbnZWYWx1ZSgiU3lzdGVtRHJpdmUiKTsKICBpZiAoZHJpdmUgPT09ICIiKSB7CiAgICByZXR1cm4gW107CiAgfQogIGNvbnN0IHVzZXJzID0gYCR7ZHJpdmV9XFxVc2Vyc2A7CiAgY29uc3QgcmVjZW50X2ZpbGVzID0gW107CiAgZm9yIChjb25zdCBlbnRyeSBvZiBhd2FpdCByZWFkRGlyKHVzZXJzKSkgewogICAgdHJ5IHsKICAgICAgY29uc3QgcGF0aCA9IGAke3VzZXJzfVxcJHtlbnRyeS5maWxlbmFtZX1cXEFwcERhdGFcXFJvYW1pbmdcXE1pY3Jvc29mdFxcV2luZG93c1xcUmVjZW50YDsKICAgICAgZm9yIChjb25zdCBlbnRyeTIgb2YgYXdhaXQgcmVhZERpcihwYXRoKSkgewogICAgICAgIGlmICghZW50cnkyLmZpbGVuYW1lLmVuZHNXaXRoKCJsbmsiKSkgewogICAgICAgICAgY29udGludWU7CiAgICAgICAgfQogICAgICAgIGNvbnN0IGxua19maWxlID0gYCR7cGF0aH1cXCR7ZW50cnkyLmZpbGVuYW1lfWA7CiAgICAgICAgY29uc3QgbG5rID0gZ2V0TG5rRmlsZShsbmtfZmlsZSk7CiAgICAgICAgcmVjZW50X2ZpbGVzLnB1c2gobG5rKTsKICAgICAgfQogICAgfSBjYXRjaCAoX2Vycm9yKSB7CiAgICAgIGNvbnRpbnVlOwogICAgfQogIH0KICByZXR1cm4gcmVjZW50X2ZpbGVzOwp9Cm1haW4oKTsK"
macOS TOML collection focusing on collecting data to help investigate a macOS incident.
system = "macos"
[output]
name = "macos_collection"
directory = "./tmp"
format = "jsonl"
compress = true
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"
[[artifacts]]
artifact_name = "processes"
[artifacts.processes]
md5 = true
sha1 = false
sha256 = false
metadata = true
[[artifacts]]
artifact_name = "loginitems"
[[artifacts]]
artifact_name = "emond"
[[artifacts]]
artifact_name = "fseventsd"
[[artifacts]]
artifact_name = "launchd"
[[artifacts]]
artifact_name = "files"
[artifacts.files]
start_path = "/"
depth = 90
metadata = true
md5 = true
sha1 = false
sha256 = false
regex_filter = ""
[[artifacts]]
artifact_name = "users"
[[artifacts]]
artifact_name = "groups"
[[artifacts]]
artifact_name = "systeminfo"
[[artifacts]]
artifact_name = "shell_history"
[[artifacts]]
artifact_name = "chromium-history"
[[artifacts]]
artifact_name = "chromium-downloads"
[[artifacts]]
artifact_name = "firefox-history"
[[artifacts]]
artifact_name = "firefox-downloads"
[[artifacts]]
artifact_name = "safari-history"
[[artifacts]]
artifact_name = "safari-downloads"
[[artifacts]]
artifact_name = "cron"
[[artifacts]]
artifact_name = "unifiedlogs"
[artifacts.unifiedlogs]
sources = ["Persist", "Special", "Signpost", "HighVolume"] # Option to specify the log directories (sources)
File Listings
Windows TOML collection looking for all files created in the last 14 days
system = "windows"
[output]
name = "recent_files"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
filter_name = "recently_created_files"
# This script will take the files artifact below and filter it to only return files that were created in the past 14 days
filter_script = "Ly8gbWFpbi50cwpmdW5jdGlvbiBtYWluKCkgewogIGNvbnN0IGFyZ3MgPSBTVEFUSUNfQVJHUzsKICBpZiAoYXJncy5sZW5ndGggPT09IDApIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3QgZGF0YSA9IEpTT04ucGFyc2UoYXJnc1swXSk7CiAgY29uc3QgdGltZV9ub3cgPSBuZXcgRGF0ZSgpOwogIGNvbnN0IG1pbGxpc2Vjb25kcyA9IHRpbWVfbm93LmdldFRpbWUoKTsKICBjb25zdCBzZWNvbmRzID0gbWlsbGlzZWNvbmRzIC8gMWUzOwogIGNvbnN0IGZvdXJ0ZWVuX2RheXMgPSAxMjA5NjAwOwogIGNvbnN0IGVhcmxpZXN0X3N0YXJ0ID0gc2Vjb25kcyAtIGZvdXJ0ZWVuX2RheXM7CiAgY29uc3QgZmlsdGVyX2RhdGEgPSBbXTsKICBmb3IgKGNvbnN0IGVudHJ5IG9mIGRhdGEpIHsKICAgIGlmIChlbnRyeS5jcmVhdGVkIDwgZWFybGllc3Rfc3RhcnQpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgICBmaWx0ZXJfZGF0YS5wdXNoKGVudHJ5KTsKICB9CiAgcmV0dXJuIGZpbHRlcl9kYXRhOwp9Cm1haW4oKTsK"
[[artifacts]]
artifact_name = "files" # Name of artifact
filter = true
[artifacts.files]
start_path = "C:\\" # Start of file listing
depth = 100 # How many sub directories to descend
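Decoded, the filter_script keeps only entries created in the last fourteen days, working in seconds since the Unix epoch. A standalone sketch of that check (the entry shape is trimmed for illustration):

```typescript
// Trimmed entry shape; artemis file listings carry many more fields.
interface ListingEntry {
  full_path: string;
  created: number; // creation timestamp in seconds since the Unix epoch
}

const FOURTEEN_DAYS_SECONDS = 1209600; // 14 days * 86400 seconds

// Keep only entries created within the last fourteen days of `nowSeconds`,
// mirroring the decoded filter_script.
function recentlyCreated(entries: ListingEntry[], nowSeconds: number): ListingEntry[] {
  const earliest = nowSeconds - FOURTEEN_DAYS_SECONDS;
  return entries.filter((entry) => entry.created >= earliest);
}

const now = Math.floor(Date.now() / 1000);
const listing: ListingEntry[] = [
  { full_path: "C:\\Users\\user\\new.txt", created: now - 3600 },       // kept
  { full_path: "C:\\Windows\\notepad.exe", created: now - 30 * 86400 }, // dropped
];
console.log(recentlyCreated(listing, now).length); // 1
```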
macOS TOML collection looking for all files created in the last 14 days
system = "macos"
[output]
name = "recent_files"
directory = "./tmp"
format = "jsonl"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
filter_name = "recently_created_files"
# This script will take the files artifact below and filter it to only return files that were created in the past 14 days
filter_script = "Ly8gbWFpbi50cwpmdW5jdGlvbiBtYWluKCkgewogIGNvbnN0IGFyZ3MgPSBTVEFUSUNfQVJHUzsKICBpZiAoYXJncy5sZW5ndGggPT09IDApIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3QgZGF0YSA9IEpTT04ucGFyc2UoYXJnc1swXSk7CiAgY29uc3QgdGltZV9ub3cgPSBuZXcgRGF0ZSgpOwogIGNvbnN0IG1pbGxpc2Vjb25kcyA9IHRpbWVfbm93LmdldFRpbWUoKTsKICBjb25zdCBzZWNvbmRzID0gbWlsbGlzZWNvbmRzIC8gMWUzOwogIGNvbnN0IGZvdXJ0ZWVuX2RheXMgPSAxMjA5NjAwOwogIGNvbnN0IGVhcmxpZXN0X3N0YXJ0ID0gc2Vjb25kcyAtIGZvdXJ0ZWVuX2RheXM7CiAgY29uc3QgZmlsdGVyX2RhdGEgPSBbXTsKICBmb3IgKGNvbnN0IGVudHJ5IG9mIGRhdGEpIHsKICAgIGlmIChlbnRyeS5jcmVhdGVkIDwgZWFybGllc3Rfc3RhcnQpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgICBmaWx0ZXJfZGF0YS5wdXNoKGVudHJ5KTsKICB9CiAgcmV0dXJuIGZpbHRlcl9kYXRhOwp9Cm1haW4oKTsK"
[[artifacts]]
artifact_name = "files" # Name of artifact
filter = true
[artifacts.files]
start_path = "/" # Start of file listing
depth = 100 # How many sub directories to descend
Scripts
A Windows collection script that does the following:
- Parses and filters user Registry Run\RunOnce keys that contain the values: ["cmd.exe", "powershell", "temp", "appdata", "script"]
- Parses and filters the System Event Log for service installs that contain the values: [".bat", "powershell", "cmd.exe", "COMSPEC"]
- Parses and filters BITS jobs looking for uncommon BITS jobs
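Both the Registry and Event Log filters rely on the same pattern: case-insensitive substring matching against a small keyword list. A minimal sketch of that check (note it lowercases both sides, so the uppercase COMSPEC keyword still matches):

```typescript
// Keyword list from the decoded filter script; the helper lowercases both
// the haystack and each keyword before matching.
const SUS_SERVICES = [".bat", "powershell", "cmd.exe", "COMSPEC"];

function isSuspicious(haystack: string, keywords: string[]): boolean {
  return keywords.some((value) =>
    haystack.toLowerCase().includes(value.toLowerCase()),
  );
}

console.log(isSuspicious("%COMSPEC% /c evil.bat", SUS_SERVICES)); // true
console.log(isSuspicious("C:\\Windows\\System32\\svchost.exe", SUS_SERVICES)); // false
```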
system = "windows"
[output]
name = "win_filter"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
filter_name = "regs_bits"
# This script will filter for sus Run\RunOnce Reg keys and non-builtin BITS jobs
filter_script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9ldmVudGxvZ3MudHMKZnVuY3Rpb24gZ2V0RXZlbnRsb2dzKHBhdGgpIHsKICBjb25zdCByZXN1bHRzID0gRGVuby5jb3JlLm9wcy5nZXRfZXZlbnRsb2dzKHBhdGgpOwogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKHJlc3VsdHMpOwogIHJldHVybiBkYXRhOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9lbnZpcm9ubWVudC9lbnYudHMKZnVuY3Rpb24gZ2V0RW52VmFsdWUoa2V5KSB7CiAgY29uc3QgZGF0YSA9IGVudi5lbnZpcm9ubWVudFZhbHVlKGtleSk7CiAgcmV0dXJuIGRhdGE7Cn0KCi8vIG1haW4udHMKZnVuY3Rpb24gZ3JhYkV2ZW50TG9ncygpIHsKICBjb25zdCBkcml2ZSA9IGdldEVudlZhbHVlKCJTeXN0ZW1Ecml2ZSIpOwogIGlmIChkcml2ZSA9PT0gIiIpIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3QgZGF0YSA9IGdldEV2ZW50bG9ncygKICAgIGAke2RyaXZlfVxcV2luZG93c1xcU3lzdGVtMzJcXHdpbmV2dFxcTG9nc1xcU3lzdGVtLmV2dHhgCiAgKTsKICBjb25zdCBzZXJ2aWNlX2luc3RhbGxzID0gW107CiAgY29uc3Qgc3VzX3NlcnZpY2VzID0gWyIuYmF0IiwgInBvd2Vyc2hlbGwiLCAiY21kLmV4ZSIsICJDT01TUEVDIl07CiAgZm9yIChjb25zdCByZWNvcmQgb2YgZGF0YSkgewogICAgaWYgKHJlY29yZC5kYXRhWyJFdmVudCJdWyJTeXN0ZW0iXVsiRXZlbnRJRCJdICE9IDcwNDUgJiYgcmVjb3JkLmRhdGFbIkV2ZW50Il1bIlN5c3RlbSJdWyJFdmVudElEIl1bIiN0ZXh0Il0gIT0gNzA0NSkgewogICAgICBjb250aW51ZTsKICAgIH0KICAgIGlmIChyZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIlNlcnZpY2VOYW1lIl0ubGVuZ3RoID09PSAxNiB8fCBzdXNfc2VydmljZXMuc29tZSgKICAgICAgKHZhbHVlKSA9PiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIkltYWdlUGF0aCJdLnRvTG93ZXJDYXNlKCkuaW5jbHVkZXModmFsdWUpCiAgICApKSB7CiAgICAgIHNlcnZpY2VfaW5zdGFsbHMucHVzaChyZWNvcmQpOwogICAgfQogIH0KICByZXR1cm4gc2VydmljZV9pbnN0YWxsczsKfQpmdW5jdGlvbiBmaWx0ZXJSZWdpc3RyeShkYXRhKSB7CiAgY29uc3QgcmVncyA9IEpTT04ucGFyc2UoZGF0YSk7CiAgY29uc3Qgc3VzX3J1bl9rZXlzID0gWyJjbWQuZXhlIiwgInBvd2Vyc2hlbGwiLCAidGVtcCIsICJhcHBkYXRhIiwgInNjcmlwdCJdOwogIGNvbnN0IHN1c19oaXQgPSB7CiAgICByZWdpc3RyeV9maWxlOiByZWdzLnJlZ2lzdHJ5X2ZpbGUsCiAgICByZWdpc3RyeV9wYXRoOiByZWdzLnJlZ2lzdHJ5X3BhdGgsCiAgICByZWdpc3RyeV9lbnRyaWVzOiBbXQogIH07CiAgZm9yIChjb25zdCByZWNvcmQgb2YgcmVncy5yZWdpc3RyeV9lbnRyaWVzKSB
7CiAgICBpZiAocmVjb3JkLm5hbWUgPT09ICJSdW4iIHx8IHJlY29yZC5uYW1lID09PSAiUnVuT25jZSIpIHsKICAgICAgY29uc3QgcmVnX2hpdCA9IHsKICAgICAgICBrZXk6IHJlY29yZC5rZXksCiAgICAgICAgbmFtZTogcmVjb3JkLm5hbWUsCiAgICAgICAgcGF0aDogcmVjb3JkLnBhdGgsCiAgICAgICAgdmFsdWVzOiBbXSwKICAgICAgICBsYXN0X21vZGlmaWVkOiByZWNvcmQubGFzdF9tb2RpZmllZCwKICAgICAgICBkZXB0aDogcmVjb3JkLmRlcHRoCiAgICAgIH07CiAgICAgIGZvciAoY29uc3QgdmFsdWUgb2YgcmVjb3JkLnZhbHVlcykgewogICAgICAgIGlmIChzdXNfcnVuX2tleXMuc29tZSgKICAgICAgICAgIChyZWdfdmFsdWUpID0+IHZhbHVlLmRhdGEudG9Mb3dlckNhc2UoKS5pbmNsdWRlcyhyZWdfdmFsdWUpCiAgICAgICAgKSkgewogICAgICAgICAgcmVnX2hpdC52YWx1ZXMucHVzaCh2YWx1ZSk7CiAgICAgICAgfQogICAgICB9CiAgICAgIGlmIChyZWdfaGl0LnZhbHVlcy5sZW5ndGggPT09IDApIHsKICAgICAgICBjb250aW51ZTsKICAgICAgfQogICAgICBzdXNfaGl0LnJlZ2lzdHJ5X2VudHJpZXMucHVzaChyZWdfaGl0KTsKICAgIH0KICB9CiAgcmV0dXJuIHN1c19oaXQ7Cn0KZnVuY3Rpb24gZmlsdGVyQml0cyhkYXRhKSB7CiAgY29uc3QgYml0c19kYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICBjb25zdCBzdXNfYml0cyA9IHsKICAgIGJpdHM6IFtdLAogICAgY2FydmVkX2ZpbGVzOiBbXSwKICAgIGNhcnZlZF9qb2JzOiBbXQogIH07CiAgY29uc3Qgc3RhbmRhcmRfYml0cyA9IFsKICAgICJtb3ppbGxhIiwKICAgICJvdXRsb29rIiwKICAgICJlZGdlIiwKICAgICJvbmVkcml2ZSIsCiAgICAiZ29vZ2xlIiwKICAgICJzcGVlY2giCiAgXTsKICBmb3IgKGNvbnN0IGJpdCBvZiBiaXRzX2RhdGEuYml0cykgewogICAgaWYgKCFzdGFuZGFyZF9iaXRzLnNvbWUoCiAgICAgICh2YWx1ZSkgPT4gYml0LmZ1bGxfcGF0aC50b0xvd2VyQ2FzZSgpLmluY2x1ZGVzKHZhbHVlKQogICAgKSAmJiAhc3RhbmRhcmRfYml0cy5zb21lKCh2YWx1ZSkgPT4gYml0LnVybC50b0xvd2VyQ2FzZSgpLmluY2x1ZGVzKHZhbHVlKSkpIHsKICAgICAgc3VzX2JpdHMuYml0cy5wdXNoKGJpdCk7CiAgICB9CiAgfQogIGZvciAoY29uc3QgYml0IG9mIGJpdHNfZGF0YS5jYXJ2ZWRfZmlsZXMpIHsKICAgIGlmICghc3RhbmRhcmRfYml0cy5zb21lKAogICAgICAodmFsdWUpID0+IGJpdC5mdWxsX3BhdGgudG9Mb3dlckNhc2UoKS5pbmNsdWRlcyh2YWx1ZSkKICAgICkgJiYgIXN0YW5kYXJkX2JpdHMuc29tZSgodmFsdWUpID0+IGJpdC51cmwudG9Mb3dlckNhc2UoKS5pbmNsdWRlcyh2YWx1ZSkpKSB7CiAgICAgIHN1c19iaXRzLmNhcnZlZF9maWxlcy5wdXNoKGJpdCk7CiAgICB9CiAgfQogIGZvciAoY29uc3QgYml0IG9mIGJpdHNfZGF0YS5jYXJ2ZWRfam9icykgewogICAgaWYgKCFzdGFuZGFyZF9iaXRzLnNvbWUoCiAgICAgICh2YWx1ZSkgPT4gYml
0LnRhcmdldF9wYXRoLnRvTG93ZXJDYXNlKCkuaW5jbHVkZXModmFsdWUpCiAgICApKSB7CiAgICAgIHN1c19iaXRzLmNhcnZlZF9qb2JzLnB1c2goYml0KTsKICAgIH0KICB9CiAgcmV0dXJuIHN1c19iaXRzOwp9CmZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgYXJncyA9IFNUQVRJQ19BUkdTOwogIGlmIChhcmdzLmxlbmd0aCA8IDIpIHsKICAgIHJldHVybiBncmFiRXZlbnRMb2dzKCk7CiAgfQogIGlmIChhcmdzWzFdID09PSAicmVnaXN0cnkiKSB7CiAgICByZXR1cm4gZmlsdGVyUmVnaXN0cnkoYXJnc1swXSk7CiAgfQogIGlmIChhcmdzWzFdID09PSAiYml0cyIpIHsKICAgIHJldHVybiBmaWx0ZXJCaXRzKGFyZ3NbMF0pOwogIH0KICByZXR1cm4gSlNPTi5wYXJzZShhcmdzWzBdKTsKfQptYWluKCk7Cg=="
[[artifacts]]
artifact_name = "bits"
filter = true
[artifacts.bits]
carve = true
[[artifacts]]
artifact_name = "registry"
filter = true
[artifacts.registry]
user_hives = true
system_hives = false
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "sus_7045_eids"
# The script below is the same as the filter script. It's coded in a manner that will work as both a filter and a normal script
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvd2luZG93cy9ldmVudGxvZ3MudHMKZnVuY3Rpb24gZ2V0RXZlbnRsb2dzKHBhdGgpIHsKICBjb25zdCByZXN1bHRzID0gRGVuby5jb3JlLm9wcy5nZXRfZXZlbnRsb2dzKHBhdGgpOwogIGNvbnN0IGRhdGEgPSBKU09OLnBhcnNlKHJlc3VsdHMpOwogIHJldHVybiBkYXRhOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9lbnZpcm9ubWVudC9lbnYudHMKZnVuY3Rpb24gZ2V0RW52VmFsdWUoa2V5KSB7CiAgY29uc3QgZGF0YSA9IGVudi5lbnZpcm9ubWVudFZhbHVlKGtleSk7CiAgcmV0dXJuIGRhdGE7Cn0KCi8vIG1haW4udHMKZnVuY3Rpb24gZ3JhYkV2ZW50TG9ncygpIHsKICBjb25zdCBkcml2ZSA9IGdldEVudlZhbHVlKCJTeXN0ZW1Ecml2ZSIpOwogIGlmIChkcml2ZSA9PT0gIiIpIHsKICAgIHJldHVybiBbXTsKICB9CiAgY29uc3QgZGF0YSA9IGdldEV2ZW50bG9ncygKICAgIGAke2RyaXZlfVxcV2luZG93c1xcU3lzdGVtMzJcXHdpbmV2dFxcTG9nc1xcU3lzdGVtLmV2dHhgCiAgKTsKICBjb25zdCBzZXJ2aWNlX2luc3RhbGxzID0gW107CiAgY29uc3Qgc3VzX3NlcnZpY2VzID0gWyIuYmF0IiwgInBvd2Vyc2hlbGwiLCAiY21kLmV4ZSIsICJDT01TUEVDIl07CiAgZm9yIChjb25zdCByZWNvcmQgb2YgZGF0YSkgewogICAgaWYgKHJlY29yZC5kYXRhWyJFdmVudCJdWyJTeXN0ZW0iXVsiRXZlbnRJRCJdICE9IDcwNDUgJiYgcmVjb3JkLmRhdGFbIkV2ZW50Il1bIlN5c3RlbSJdWyJFdmVudElEIl1bIiN0ZXh0Il0gIT0gNzA0NSkgewogICAgICBjb250aW51ZTsKICAgIH0KICAgIGlmIChyZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIlNlcnZpY2VOYW1lIl0ubGVuZ3RoID09PSAxNiB8fCBzdXNfc2VydmljZXMuc29tZSgKICAgICAgKHZhbHVlKSA9PiByZWNvcmQuZGF0YVsiRXZlbnQiXVsiRXZlbnREYXRhIl1bIkltYWdlUGF0aCJdLnRvTG93ZXJDYXNlKCkuaW5jbHVkZXModmFsdWUpCiAgICApKSB7CiAgICAgIHNlcnZpY2VfaW5zdGFsbHMucHVzaChyZWNvcmQpOwogICAgfQogIH0KICByZXR1cm4gc2VydmljZV9pbnN0YWxsczsKfQpmdW5jdGlvbiBmaWx0ZXJSZWdpc3RyeShkYXRhKSB7CiAgY29uc3QgcmVncyA9IEpTT04ucGFyc2UoZGF0YSk7CiAgY29uc3Qgc3VzX3J1bl9rZXlzID0gWyJjbWQuZXhlIiwgInBvd2Vyc2hlbGwiLCAidGVtcCIsICJhcHBkYXRhIiwgInNjcmlwdCJdOwogIGNvbnN0IHN1c19oaXQgPSB7CiAgICByZWdpc3RyeV9maWxlOiByZWdzLnJlZ2lzdHJ5X2ZpbGUsCiAgICByZWdpc3RyeV9wYXRoOiByZWdzLnJlZ2lzdHJ5X3BhdGgsCiAgICByZWdpc3RyeV9lbnRyaWVzOiBbXQogIH07CiAgZm9yIChjb25zdCByZWNvcmQgb2YgcmVncy5yZWdpc3RyeV9lbnRyaWVzKSB7CiAgIC
BpZiAocmVjb3JkLm5hbWUgPT09ICJSdW4iIHx8IHJlY29yZC5uYW1lID09PSAiUnVuT25jZSIpIHsKICAgICAgY29uc3QgcmVnX2hpdCA9IHsKICAgICAgICBrZXk6IHJlY29yZC5rZXksCiAgICAgICAgbmFtZTogcmVjb3JkLm5hbWUsCiAgICAgICAgcGF0aDogcmVjb3JkLnBhdGgsCiAgICAgICAgdmFsdWVzOiBbXSwKICAgICAgICBsYXN0X21vZGlmaWVkOiByZWNvcmQubGFzdF9tb2RpZmllZCwKICAgICAgICBkZXB0aDogcmVjb3JkLmRlcHRoCiAgICAgIH07CiAgICAgIGZvciAoY29uc3QgdmFsdWUgb2YgcmVjb3JkLnZhbHVlcykgewogICAgICAgIGlmIChzdXNfcnVuX2tleXMuc29tZSgKICAgICAgICAgIChyZWdfdmFsdWUpID0+IHZhbHVlLmRhdGEudG9Mb3dlckNhc2UoKS5pbmNsdWRlcyhyZWdfdmFsdWUpCiAgICAgICAgKSkgewogICAgICAgICAgcmVnX2hpdC52YWx1ZXMucHVzaCh2YWx1ZSk7CiAgICAgICAgfQogICAgICB9CiAgICAgIGlmIChyZWdfaGl0LnZhbHVlcy5sZW5ndGggPT09IDApIHsKICAgICAgICBjb250aW51ZTsKICAgICAgfQogICAgICBzdXNfaGl0LnJlZ2lzdHJ5X2VudHJpZXMucHVzaChyZWdfaGl0KTsKICAgIH0KICB9CiAgcmV0dXJuIHN1c19oaXQ7Cn0KZnVuY3Rpb24gZmlsdGVyQml0cyhkYXRhKSB7CiAgY29uc3QgYml0c19kYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICBjb25zdCBzdXNfYml0cyA9IHsKICAgIGJpdHM6IFtdLAogICAgY2FydmVkX2ZpbGVzOiBbXSwKICAgIGNhcnZlZF9qb2JzOiBbXQogIH07CiAgY29uc3Qgc3RhbmRhcmRfYml0cyA9IFsKICAgICJtb3ppbGxhIiwKICAgICJvdXRsb29rIiwKICAgICJlZGdlIiwKICAgICJvbmVkcml2ZSIsCiAgICAiZ29vZ2xlIiwKICAgICJzcGVlY2giCiAgXTsKICBmb3IgKGNvbnN0IGJpdCBvZiBiaXRzX2RhdGEuYml0cykgewogICAgaWYgKCFzdGFuZGFyZF9iaXRzLnNvbWUoCiAgICAgICh2YWx1ZSkgPT4gYml0LmZ1bGxfcGF0aC50b0xvd2VyQ2FzZSgpLmluY2x1ZGVzKHZhbHVlKQogICAgKSAmJiAhc3RhbmRhcmRfYml0cy5zb21lKCh2YWx1ZSkgPT4gYml0LnVybC50b0xvd2VyQ2FzZSgpLmluY2x1ZGVzKHZhbHVlKSkpIHsKICAgICAgc3VzX2JpdHMuYml0cy5wdXNoKGJpdCk7CiAgICB9CiAgfQogIGZvciAoY29uc3QgYml0IG9mIGJpdHNfZGF0YS5jYXJ2ZWRfZmlsZXMpIHsKICAgIGlmICghc3RhbmRhcmRfYml0cy5zb21lKAogICAgICAodmFsdWUpID0+IGJpdC5mdWxsX3BhdGgudG9Mb3dlckNhc2UoKS5pbmNsdWRlcyh2YWx1ZSkKICAgICkgJiYgIXN0YW5kYXJkX2JpdHMuc29tZSgodmFsdWUpID0+IGJpdC51cmwudG9Mb3dlckNhc2UoKS5pbmNsdWRlcyh2YWx1ZSkpKSB7CiAgICAgIHN1c19iaXRzLmNhcnZlZF9maWxlcy5wdXNoKGJpdCk7CiAgICB9CiAgfQogIGZvciAoY29uc3QgYml0IG9mIGJpdHNfZGF0YS5jYXJ2ZWRfam9icykgewogICAgaWYgKCFzdGFuZGFyZF9iaXRzLnNvbWUoCiAgICAgICh2YWx1ZSkgPT4gYml0LnRhcm
dldF9wYXRoLnRvTG93ZXJDYXNlKCkuaW5jbHVkZXModmFsdWUpCiAgICApKSB7CiAgICAgIHN1c19iaXRzLmNhcnZlZF9qb2JzLnB1c2goYml0KTsKICAgIH0KICB9CiAgcmV0dXJuIHN1c19iaXRzOwp9CmZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgYXJncyA9IFNUQVRJQ19BUkdTOwogIGlmIChhcmdzLmxlbmd0aCA8IDIpIHsKICAgIHJldHVybiBncmFiRXZlbnRMb2dzKCk7CiAgfQogIGlmIChhcmdzWzFdID09PSAicmVnaXN0cnkiKSB7CiAgICByZXR1cm4gZmlsdGVyUmVnaXN0cnkoYXJnc1swXSk7CiAgfQogIGlmIChhcmdzWzFdID09PSAiYml0cyIpIHsKICAgIHJldHVybiBmaWx0ZXJCaXRzKGFyZ3NbMF0pOwogIH0KICByZXR1cm4gSlNPTi5wYXJzZShhcmdzWzBdKTsKfQptYWluKCk7Cg=="
A macOS collection script that does the following:
- Parses and filters the `Persist` UnifiedLog files for log messages that contain `sudo` or `osascript`
- Parses and filters Fseventsd entries for evidence of `.dmg` files or files in `/tmp`
- Parses and filters an App filelisting to list Applications and their associated `Info.plist` content
- Parses `LoginItems` and tries to parse the associated persistence binary (if it exists and is a `macho` executable)
The script is coded in a manner so that it can run as either a filter or a normal script.
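Decoding the base64 script below shows how this dual-mode behavior works: the entry point inspects `STATIC_ARGS` (populated by the artemis runtime) to decide whether it was invoked as a filter, with serialized artifact data passed in, or as a normal script. A simplified sketch of that pattern, with the collection logic stubbed out:

```typescript
// Simplified sketch of the dual-mode entry point used by the filter scripts.
// In artemis, STATIC_ARGS is injected by the runtime; it is stubbed here.
const STATIC_ARGS: string[] = [];

function grabLoginItems(): unknown[] {
  // Normal-script path: collect data directly (stubbed for this sketch)
  return [];
}

function filterLogs(data: string): unknown[] {
  // Filter path: parse the serialized artifact data and keep only hits
  return JSON.parse(data).filter(
    (entry: { message: string }) =>
      entry.message.includes("sudo") || entry.message.includes("osascript"),
  );
}

function main(): unknown {
  const args = STATIC_ARGS;
  if (args.length < 2) {
    return grabLoginItems(); // no artifact data: run as a normal script
  }
  if (args[1] === "unifiedlogs") {
    return filterLogs(args[0]); // artifact data passed in: run as a filter
  }
  return JSON.parse(args[0]); // unknown artifact: pass data through unchanged
}

main();
```

Because the artifact name arrives as the second argument, one script can filter several artifact types while still doing useful work when run standalone.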
system = "macos"
[output]
name = "mac_filter"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "abdc"
collection_id = 1
output = "local"
filter_name = "unifiedlogs_fsevents_filter"
# This script will filter for unifiedlogs, fseventsd, and files
filter_script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvbWFjb3MvbG9naW5pdGVtcy50cwpmdW5jdGlvbiBnZXRMb2dpbml0ZW1zKCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9sb2dpbml0ZW1zKCk7CiAgY29uc3QgaXRlbXMgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiBpdGVtczsKfQoKLy8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvbWFjb3MvbWFjaG8udHMKZnVuY3Rpb24gZ2V0TWFjaG8ocGF0aCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9tYWNobyhwYXRoKTsKICBpZiAoZGF0YSA9PT0gIiIpIHsKICAgIHJldHVybiBudWxsOwogIH0KICBjb25zdCBtYWNobyA9IEpTT04ucGFyc2UoZGF0YSk7CiAgcmV0dXJuIG1hY2hvOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9tYWNvcy9wbGlzdC50cwpmdW5jdGlvbiBnZXRQbGlzdChwYXRoKSB7CiAgY29uc3QgZGF0YSA9IERlbm8uY29yZS5vcHMuZ2V0X3BsaXN0KHBhdGgpOwogIGlmIChkYXRhID09PSAiIikgewogICAgcmV0dXJuIG51bGw7CiAgfQogIGNvbnN0IHBsaXN0X2RhdGEgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiBwbGlzdF9kYXRhOwp9CgovLyBtYWluLnRzCmZ1bmN0aW9uIGdyYWJMb2dpbkl0ZW1zKCkgewogIGNvbnN0IGRhdGEgPSBnZXRMb2dpbml0ZW1zKCk7CiAgY29uc3QgaXRlbXNNYWNobyA9IFtdOwogIGZvciAoY29uc3QgZW50cnkgb2YgZGF0YSkgewogICAgdHJ5IHsKICAgICAgY29uc3QgaXRlbSA9IHsKICAgICAgICBpdGVtczogZW50cnksCiAgICAgICAgbWFjaG86IGdldE1hY2hvKGVudHJ5LnBhdGguam9pbigiLyIpKQogICAgICB9OwogICAgICBpdGVtc01hY2hvLnB1c2goaXRlbSk7CiAgICB9IGNhdGNoIChfZSkgewogICAgICBjb25zdCBpdGVtID0gewogICAgICAgIGl0ZW1zOiBlbnRyeSwKICAgICAgICBtYWNobzogbnVsbAogICAgICB9OwogICAgICBpdGVtc01hY2hvLnB1c2goaXRlbSk7CiAgICB9CiAgfQogIHJldHVybiBpdGVtc01hY2hvOwp9CmZ1bmN0aW9uIGZpbHRlckxvZ3MoZGF0YSkgewogIGNvbnN0IGxvZ3MgPSBbXTsKICBjb25zdCBsb2dEYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICBmb3IgKGxldCBlbnRyeSA9IDA7IGVudHJ5IDwgbG9nRGF0YS5sZW5ndGg7IGVudHJ5KyspIHsKICAgIGlmICghbG9nRGF0YVtlbnRyeV0ubWVzc2FnZS5pbmNsdWRlcygic3VkbyIpICYmICFsb2dEYXRhW2VudHJ5XS5tZXNzYWdlLmluY2x1ZGVzKCJvc2FzY3JpcHQiKSkgewogICAgICBjb250aW51ZTsKICAgIH0KICAgIGxvZ3MucHVzaChsb2dEYXRhW2VudHJ5XSk7CiAgfQogIHJldHVybiBsb2dzOwp9CmZ1bmN0aW9uIGZpbHRlckV2ZW50cyhkYXRhKSB
7CiAgY29uc3QgZXZlbnRzID0gW107CiAgY29uc3QgZXZlbnRzRGF0YSA9IEpTT04ucGFyc2UoZGF0YSk7CiAgZm9yIChjb25zdCBlbnRyeSBvZiBldmVudHNEYXRhKSB7CiAgICBpZiAoIWVudHJ5LnBhdGguaW5jbHVkZXMoIi5kbWciKSAmJiAhZW50cnkucGF0aC5zdGFydHNXaXRoKCIvdG1wIikpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgICBldmVudHMucHVzaChlbnRyeSk7CiAgfQogIHJldHVybiBldmVudHM7Cn0KZnVuY3Rpb24gZmlsdGVyQXBwcyhkYXRhKSB7CiAgY29uc3QgYXBwcyA9IFtdOwogIGNvbnN0IGZpbGVzRGF0YSA9IEpTT04ucGFyc2UoZGF0YSk7CiAgZm9yIChsZXQgZW50cnkgPSAwOyBlbnRyeSA8IGZpbGVzRGF0YS5sZW5ndGg7IGVudHJ5KyspIHsKICAgIGlmIChmaWxlc0RhdGFbZW50cnldLmZ1bGxfcGF0aC5pbmNsdWRlcygiLmFwcCIpICYmIGZpbGVzRGF0YVtlbnRyeV0uZmlsZW5hbWUgIT0gIkluZm8ucGxpc3QiKSB7CiAgICAgIGNvbnRpbnVlOwogICAgfQogICAgY29uc3QgYXBwID0gewogICAgICBhcHBfcGF0aDogZmlsZXNEYXRhW2VudHJ5XS5kaXJlY3RvcnksCiAgICAgIGluZm9fcGxpc3Q6IGZpbGVzRGF0YVtlbnRyeV0uZnVsbF9wYXRoLAogICAgICBwbGlzdDogZ2V0UGxpc3QoZmlsZXNEYXRhW2VudHJ5XS5mdWxsX3BhdGgpCiAgICB9OwogICAgYXBwcy5wdXNoKGFwcCk7CiAgfQogIHJldHVybiBhcHBzOwp9CmZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgYXJncyA9IFNUQVRJQ19BUkdTOwogIGlmIChhcmdzLmxlbmd0aCA8IDIpIHsKICAgIHJldHVybiBncmFiTG9naW5JdGVtcygpOwogIH0KICBpZiAoYXJnc1sxXSA9PT0gInVuaWZpZWRsb2dzIikgewogICAgcmV0dXJuIGZpbHRlckxvZ3MoYXJnc1swXSk7CiAgfQogIGlmIChhcmdzWzFdID09PSAiZnNldmVudHNkIikgewogICAgcmV0dXJuIGZpbHRlckV2ZW50cyhhcmdzWzBdKTsKICB9CiAgaWYgKGFyZ3NbMV0gPT09ICJmaWxlcyIpIHsKICAgIHJldHVybiBmaWx0ZXJBcHBzKGFyZ3NbMF0pOwogIH0KICByZXR1cm4gSlNPTi5wYXJzZShhcmdzWzBdKTsKfQptYWluKCk7Cg=="
[[artifacts]]
artifact_name = "unifiedlogs"
filter = true
[artifacts.unifiedlogs]
sources = ["Persist"]
[[artifacts]]
artifact_name = "fseventsd"
filter = true
[[artifacts]]
artifact_name = "files"
filter = true
[artifacts.files]
start_path = "/System/Volumes/Data/Applications"
depth = 15
[[artifacts]]
artifact_name = "script"
[artifacts.script]
name = "loginitems_macho" # No filtering applied
# The script below is the same as the filter script. It's coded in a manner that will work as both a filter and a normal script
script = "Ly8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvbWFjb3MvbG9naW5pdGVtcy50cwpmdW5jdGlvbiBnZXRMb2dpbml0ZW1zKCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9sb2dpbml0ZW1zKCk7CiAgY29uc3QgaXRlbXMgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiBpdGVtczsKfQoKLy8gaHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL3B1ZmZ5Y2lkL2FydGVtaXMtYXBpL21hc3Rlci9zcmMvbWFjb3MvbWFjaG8udHMKZnVuY3Rpb24gZ2V0TWFjaG8ocGF0aCkgewogIGNvbnN0IGRhdGEgPSBEZW5vLmNvcmUub3BzLmdldF9tYWNobyhwYXRoKTsKICBpZiAoZGF0YSA9PT0gIiIpIHsKICAgIHJldHVybiBudWxsOwogIH0KICBjb25zdCBtYWNobyA9IEpTT04ucGFyc2UoZGF0YSk7CiAgcmV0dXJuIG1hY2hvOwp9CgovLyBodHRwczovL3Jhdy5naXRodWJ1c2VyY29udGVudC5jb20vcHVmZnljaWQvYXJ0ZW1pcy1hcGkvbWFzdGVyL3NyYy9tYWNvcy9wbGlzdC50cwpmdW5jdGlvbiBnZXRQbGlzdChwYXRoKSB7CiAgY29uc3QgZGF0YSA9IERlbm8uY29yZS5vcHMuZ2V0X3BsaXN0KHBhdGgpOwogIGlmIChkYXRhID09PSAiIikgewogICAgcmV0dXJuIG51bGw7CiAgfQogIGNvbnN0IHBsaXN0X2RhdGEgPSBKU09OLnBhcnNlKGRhdGEpOwogIHJldHVybiBwbGlzdF9kYXRhOwp9CgovLyBtYWluLnRzCmZ1bmN0aW9uIGdyYWJMb2dpbkl0ZW1zKCkgewogIGNvbnN0IGRhdGEgPSBnZXRMb2dpbml0ZW1zKCk7CiAgY29uc3QgaXRlbXNNYWNobyA9IFtdOwogIGZvciAoY29uc3QgZW50cnkgb2YgZGF0YSkgewogICAgdHJ5IHsKICAgICAgY29uc3QgaXRlbSA9IHsKICAgICAgICBpdGVtczogZW50cnksCiAgICAgICAgbWFjaG86IGdldE1hY2hvKGVudHJ5LnBhdGguam9pbigiLyIpKQogICAgICB9OwogICAgICBpdGVtc01hY2hvLnB1c2goaXRlbSk7CiAgICB9IGNhdGNoIChfZSkgewogICAgICBjb25zdCBpdGVtID0gewogICAgICAgIGl0ZW1zOiBlbnRyeSwKICAgICAgICBtYWNobzogbnVsbAogICAgICB9OwogICAgICBpdGVtc01hY2hvLnB1c2goaXRlbSk7CiAgICB9CiAgfQogIHJldHVybiBpdGVtc01hY2hvOwp9CmZ1bmN0aW9uIGZpbHRlckxvZ3MoZGF0YSkgewogIGNvbnN0IGxvZ3MgPSBbXTsKICBjb25zdCBsb2dEYXRhID0gSlNPTi5wYXJzZShkYXRhKTsKICBmb3IgKGxldCBlbnRyeSA9IDA7IGVudHJ5IDwgbG9nRGF0YS5sZW5ndGg7IGVudHJ5KyspIHsKICAgIGlmICghbG9nRGF0YVtlbnRyeV0ubWVzc2FnZS5pbmNsdWRlcygic3VkbyIpICYmICFsb2dEYXRhW2VudHJ5XS5tZXNzYWdlLmluY2x1ZGVzKCJvc2FzY3JpcHQiKSkgewogICAgICBjb250aW51ZTsKICAgIH0KICAgIGxvZ3MucHVzaChsb2dEYXRhW2VudHJ5XSk7CiAgfQogIHJldHVybiBsb2dzOwp9CmZ1bmN0aW9uIGZpbHRlckV2ZW50cyhkYXRhKSB7CiAgY2
9uc3QgZXZlbnRzID0gW107CiAgY29uc3QgZXZlbnRzRGF0YSA9IEpTT04ucGFyc2UoZGF0YSk7CiAgZm9yIChjb25zdCBlbnRyeSBvZiBldmVudHNEYXRhKSB7CiAgICBpZiAoIWVudHJ5LnBhdGguaW5jbHVkZXMoIi5kbWciKSAmJiAhZW50cnkucGF0aC5zdGFydHNXaXRoKCIvdG1wIikpIHsKICAgICAgY29udGludWU7CiAgICB9CiAgICBldmVudHMucHVzaChlbnRyeSk7CiAgfQogIHJldHVybiBldmVudHM7Cn0KZnVuY3Rpb24gZmlsdGVyQXBwcyhkYXRhKSB7CiAgY29uc3QgYXBwcyA9IFtdOwogIGNvbnN0IGZpbGVzRGF0YSA9IEpTT04ucGFyc2UoZGF0YSk7CiAgZm9yIChsZXQgZW50cnkgPSAwOyBlbnRyeSA8IGZpbGVzRGF0YS5sZW5ndGg7IGVudHJ5KyspIHsKICAgIGlmIChmaWxlc0RhdGFbZW50cnldLmZ1bGxfcGF0aC5pbmNsdWRlcygiLmFwcCIpICYmIGZpbGVzRGF0YVtlbnRyeV0uZmlsZW5hbWUgIT0gIkluZm8ucGxpc3QiKSB7CiAgICAgIGNvbnRpbnVlOwogICAgfQogICAgY29uc3QgYXBwID0gewogICAgICBhcHBfcGF0aDogZmlsZXNEYXRhW2VudHJ5XS5kaXJlY3RvcnksCiAgICAgIGluZm9fcGxpc3Q6IGZpbGVzRGF0YVtlbnRyeV0uZnVsbF9wYXRoLAogICAgICBwbGlzdDogZ2V0UGxpc3QoZmlsZXNEYXRhW2VudHJ5XS5mdWxsX3BhdGgpCiAgICB9OwogICAgYXBwcy5wdXNoKGFwcCk7CiAgfQogIHJldHVybiBhcHBzOwp9CmZ1bmN0aW9uIG1haW4oKSB7CiAgY29uc3QgYXJncyA9IFNUQVRJQ19BUkdTOwogIGlmIChhcmdzLmxlbmd0aCA8IDIpIHsKICAgIHJldHVybiBncmFiTG9naW5JdGVtcygpOwogIH0KICBpZiAoYXJnc1sxXSA9PT0gInVuaWZpZWRsb2dzIikgewogICAgcmV0dXJuIGZpbHRlckxvZ3MoYXJnc1swXSk7CiAgfQogIGlmIChhcmdzWzFdID09PSAiZnNldmVudHNkIikgewogICAgcmV0dXJuIGZpbHRlckV2ZW50cyhhcmdzWzBdKTsKICB9CiAgaWYgKGFyZ3NbMV0gPT09ICJmaWxlcyIpIHsKICAgIHJldHVybiBmaWx0ZXJBcHBzKGFyZ3NbMF0pOwogIH0KICByZXR1cm4gSlNPTi5wYXJzZShhcmdzWzBdKTsKfQptYWluKCk7Cg=="
Development Overview
The `artemis` source code is about ~66k lines of Rust across ~540 files as of September 2023 (this includes tests); however, it is organized in a fairly simple manner.
From the root of the `artemis` repo:
- `/artemis-core` workspace contains the library component of `artemis`. The bulk of the code is located here
- `/cli` workspace contains the executable component of `artemis`. This is very small
- `/server` workspace contains the experimental server component of `artemis`. It is currently very bare bones
From the `/artemis-core` directory:
- `/src` contains the source code of `artemis-core`
- `/tests` contains test data and integration tests
- `/tmp` output directory for all tests (if you choose to run them)
From the `artemis-core/src/` directory:
- `/artifacts` contains the code related to parsing forensic artifacts. It is broken down by OS and application artifacts
- `/filesystem` contains code to help interact with the filesystem. It contains helper functions that can be used when adding new artifacts/features. Ex: reading/hashing files, getting file timestamps, listing files, etc.
- `/output` contains code related to outputting parsed data
- `/runtime` contains code related to the embedded Deno runtime
- `/structs` contains code related to how TOML collection files are parsed. It tells `artemis` how to interpret TOML collections
- `/utils` contains code that helps parse artifacts and provides other features to `artemis`. Ex: decompressing/compressing data, getting environment variables, creating Regex expressions, extracting strings, converting timestamps, etc.
- `core.rs` contains the entry point to the `artemis_core` library
Adding New Artifacts
To keep the codebase organized, the following conventions should be followed when adding a new artifact:
- Artifacts have their own subfolder. Ex: `src/artifacts/os/windows/prefetch`
- The subfolder should contain at least the following files:
  - `parser.rs` - contains the `pub(crate)` accessible functions for the artifact
  - `error.rs` - artifact-specific errors
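As a rough illustration of this layout (all names below are hypothetical, not real artemis code), a new artifact's `error.rs` and `parser.rs` might look like:

```rust
// error.rs - artifact-specific errors (hypothetical example artifact)
#[derive(Debug, PartialEq)]
pub(crate) enum MyArtifactError {
    Empty,
}

// parser.rs - the pub(crate) entry point other code calls into
pub(crate) fn grab_myartifact(data: &[u8]) -> Result<Vec<String>, MyArtifactError> {
    if data.is_empty() {
        return Err(MyArtifactError::Empty);
    }
    // A real parser would decode the binary format (typically with nom);
    // this stub just reports what it received
    Ok(vec![format!("parsed {} bytes", data.len())])
}
```

Keeping the error enum separate means callers can match on artifact-specific failures without pulling in parsing internals.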
Timestamps
All timestamps `artemis` outputs are in UNIXEPOCH seconds. The only exceptions are:
- `UnifiedLogs` and `EventLogs` use UNIXEPOCH nanoseconds
- `Journals` use UNIXEPOCH microseconds
If your new artifact has a timestamp, you will need to make sure the timestamp is in UNIXEPOCH seconds. Exceptions may be allowed if needed, but these exceptions will only be for the precision (ex: seconds vs nanoseconds).
No other time formats, such as Windows FILETIME, FATTIME, Chromium time, etc., are allowed.
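For example, a Windows artifact that stores FILETIME values needs its timestamps converted before output. A minimal sketch of such a conversion (not artemis's actual helper, just the standard arithmetic):

```rust
// Windows FILETIME counts 100-nanosecond intervals since 1601-01-01.
// UNIXEPOCH counts seconds since 1970-01-01.
const SECONDS_BETWEEN_1601_AND_1970: u64 = 11_644_473_600;
const INTERVALS_PER_SECOND: u64 = 10_000_000; // 100ns intervals in one second

/// Convert a Windows FILETIME value to UNIXEPOCH seconds.
fn filetime_to_unixepoch(filetime: u64) -> u64 {
    (filetime / INTERVALS_PER_SECOND).saturating_sub(SECONDS_BETWEEN_1601_AND_1970)
}
```

`saturating_sub` keeps pre-1970 values from underflowing; a real implementation might instead return a signed value or an error for such timestamps.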
Artifact Scope
Currently all artifacts that `artemis` parses are statically coded in the binary (they are written in Rust). While this is OK, it prevents us from dynamically updating a parser if the artifact format changes (ex: a new Windows release).
Currently the JS runtime has minimal support for creating parsers. If you are interested in adding a small parser to `artemis`, it could be worth first trying to code it using the JS runtime.
An example JS runtime parser can be found in the artemis API repo.
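As a rough illustration of what a small JS runtime parser involves (the function names and header layout below are made up for this example, not a real artemis artifact), decoding fixed-size little-endian fields from a byte buffer might look like:

```typescript
// Hypothetical mini-parser: read little-endian u32 fields from a byte buffer.
function readU32Le(data: Uint8Array, offset: number): number {
  return (
    (data[offset] |
      (data[offset + 1] << 8) |
      (data[offset + 2] << 16) |
      (data[offset + 3] << 24)) >>>
    0 // force an unsigned result
  );
}

// Parse an imaginary 8-byte header: a u32 magic followed by a u32 version
function parseHeader(data: Uint8Array): { magic: number; version: number } {
  return { magic: readU32Le(data, 0), version: readU32Le(data, 4) };
}
```

Logic like this can live entirely in the JS runtime, so a format tweak only means shipping an updated script rather than rebuilding the binary.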
However, if you want to implement a new parser for common Windows artifacts such as `Jumplists`, then that is definitely something that could be worth including as a static parser.
When in doubt or unsure, open an issue!
Suggestions
If you want to add a new artifact but want to see how other artifacts are implemented, some suggested ones to review are:
- `UserAssist`: If you want to add a new Registry artifact. The `UserAssist` artifact is less than 300 lines (not counting tests) and includes:
  - Parsing binary data
  - Converting timestamps
  - Collecting user Registry data
- `FsEvents`: If you want to parse a binary file. The `FsEvents` artifact is less than 300 lines (not counting tests) and includes:
  - Parsing binary data
  - Decompressing data
  - Getting data flags

Fun fact: `FsEvents` was the first artifact created for `artemis`. It's the oldest code in the project!
Suggested Helper Functions
The `artemis` codebase contains a handful of artifacts (ex: `Registry`) that expose helper functions, allowing other artifacts to reuse parts of that artifact to get artifact-specific data.
For example, the Windows `Registry` artifact exposes a helper function that other `Registry`-based artifacts can leverage to help parse the `Registry`:
- `pub(crate) fn get_registry_keys(start_path: &str, regex: &Regex, file_path: &str)` will read a `Registry` file at the provided `file_path` and filter based on `start_path` and `regex`. If `start_path` and `regex` are empty, a full `Registry` listing is returned. All Regex comparisons are done in lowercase.
Some other examples are listed below:
- `/filesystem` contains code to help interact with the filesystem:
  - `pub(crate) fn list_files(path: &str)` returns a list of files
  - `pub(crate) fn read_file(path: &str)` reads a file
  - `pub(crate) fn hash_file(hashes: &Hashes, path: &str)` hashes a file based on the selected hashes (MD5, SHA1, SHA256)
- `/filesystem/ntfs` contains code to help interact with the raw NTFS filesystem. It lets us bypass locked files. This is only available on Windows:
  - `pub(crate) fn raw_read_file(path: &str)` reads a file. Will bypass file locks
  - `pub(crate) fn read_attribute(path: &str, attribute: &str)` can read an Alternative Data Stream (ADS)
  - `pub(crate) fn get_user_registry_files(drive: &char)` returns a Vector that contains references to all user Registry files (NTUSER.DAT and UsrClass.dat). It does not read the files, it just provides all the data needed to start reading them
- `/src/artifacts/os/macos/plist/property_list.rs` contains code to help parse `plist` files:
  - `pub(crate) fn parse_plist_file(path: &str)` will parse a `plist` file and return it as a Serde Value
  - `pub(crate) fn parse_plist_file_dict(path: &str)` will parse a `plist` file and return a Dictionary for further parsing by the caller
Prerequisites
There are a few required applications you will need in order to build and develop `artemis`:
- `artemis` is written in Rust, so you will need to download and install the Rust programming language
- Git
- Rust analyzer
- An IDE or text editor. VSCode or VSCodium are great choices. IntelliJ with the Rust plugin also works.
`artemis` has been developed on:
- macOS 12 (Monterey) and higher
- Windows 10 and higher
Building
Once you have Rust and Git installed you can build `artemis`:
- Clone the `artemis` repo at https://github.com/puffycid/artemis
- Navigate to the source code
- Run `cargo build`. By default cargo builds a `debug` version of the binary. If you want to build the `release` version of the binary, run `cargo build --release`
```shell
# Download artemis source code
git clone https://github.com/puffycid/artemis
cd artemis

# Build debug version
cargo build

# Build release version
cargo build --release
```
Adding Features
Before working on a new feature for `artemis`, please make sure you have read the Contributing document. The most important thing is to first create an issue! A high-level overview of adding a new feature:
- Create an issue. If you want to work on it, make sure to explicitly volunteer!
- Create a branch on your cloned `artemis` repo
- Work on said feature
- Ensure tests are made for all functions
- If you are adding a new artifact, add an integration test
- Run `cargo clippy`
- Run `cargo fmt`
- Open a pull request!
Other Useful Development Tools
List of useful tools that may aid in development.
Testing
`artemis` has a single basic guideline for testing:
- All functions should ideally have a test
For example, if you open a pull request to add a new feature and create three (3) new functions, you must have a test for each new function (three tests total), even if your functions are coded like: `functionA` calls `functionB` which then calls `functionC`. You must have tests for each function and not just `functionA`.
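A minimal sketch of this guideline (the function names and logic are hypothetical): even if only `function_a` is the entry point, each function in the chain gets its own test:

```rust
// Hypothetical call chain: function_a -> function_b -> function_c
fn function_c(value: u32) -> u32 {
    value + 1
}

fn function_b(value: u32) -> u32 {
    function_c(value) * 2
}

fn function_a(value: u32) -> u32 {
    function_b(value) + 10
}

#[cfg(test)]
mod tests {
    use super::*;

    // One test per function, not just one for the entry point
    #[test]
    fn test_function_c() {
        assert_eq!(function_c(1), 2);
    }

    #[test]
    fn test_function_b() {
        assert_eq!(function_b(1), 4);
    }

    #[test]
    fn test_function_a() {
        assert_eq!(function_a(1), 14);
    }
}
```

Testing every link in the chain means a regression in `function_c` is caught directly instead of surfacing as a confusing failure in `function_a`'s test.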
To run tests:

```shell
# It's recommended to run tests in release mode. This will speed up the tests
cargo test --release
```

If you are unfamiliar with creating Rust tests, the Rust book and Rust by Example have great learning resources.
Integration Tests
If you are adding a new forensic artifact to `artemis`, including an integration test for the artifact can also be very useful. Writing an integration test is a two (2) step process:
- Create a TOML collection file. This should be a TOML collection file that anyone could download and run themselves
- Create an `artifact_tester.rs` file
An example `prefetch` integration test:
- TOML file created at `<path to repo>/artemis-core/tests/test_data/windows/prefetch.toml`
```toml
system = "windows"

[output]
name = "prefetch_collection"
directory = "./tmp"
format = "json"
compress = false
endpoint_id = "6c51b123-1522-4572-9f2a-0bd5abd81b82"
collection_id = 1
output = "local"

[[artifacts]]
artifact_name = "prefetch"
[artifacts.prefetch]
alt_drive = 'C'
```
- `prefetch_tester.rs` created at `<path to repo>/artemis-core/tests/prefetch_tester.rs`
```rust
#[test]
#[cfg(target_os = "windows")]
fn test_prefetch_parser() {
    use std::path::PathBuf;

    use artemis_core::core::parse_toml_file;

    let mut test_location = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
    test_location.push("tests/test_data/windows/prefetch.toml");

    let results = parse_toml_file(&test_location.display().to_string()).unwrap();
    assert_eq!(results, ())
}
```
Our `prefetch_tester.rs` file runs the `prefetch.toml` file through the whole `artemis` program.
Test Data
If you are adding a new forensic artifact to `artemis` and you can include a sample of the artifact that can be used for tests, that would be very helpful. Some things to keep in mind though:
- Size. If the artifact is large (10-20MB) then including the sample in the `artemis` repo is unnecessary
- Licensing. If you can provide the artifact from your own system that is ideal. However, if you find the sample artifact in another GitHub repo, make sure that repo's LICENSE is compatible with `artemis`
Hypothetical Scenario
Let's say in a future version of Windows (ex: Windows 14) Microsoft decides to update the prefetch file format. You want to add support for this updated format in `artemis`. The process would be something like:
- Create an issue describing what you want to do. Ex: `Issue: Add support for Windows 14 Prefetch files`
- Clone the `artemis` source code
- Create a new branch
- Code and add the feature. Since this is related to Windows `prefetch`, you would probably be working in: `artemis-core/src/artifacts/os/windows/prefetch/versions/<prefetch version>.rs`
- Add tests for any new functions you add
- Add one (1) or more Windows 14 sample `prefetch` files to: `artemis-core/tests/test_data/windows/prefetch/win14/`
- Run tests and verify results
- Open a pull request
Learning
Some materials that may help you become a bit more familiar with the `artemis` code and logic:
- Nom
- Artifact format documentation from the libyal project
- Deno. See `runtime` and `core` for examples on embedding `Deno`
- See the Deno API for examples and tutorials on learning Deno
Resources for learning Rust:
- Rust by example
- Official Rust book
Nom
Nom is a very popular binary parsing library. `artemis` makes extensive use of nom for parsing many different kinds of binary files. If nom did not exist, `artemis` would not exist. If you familiarize yourself with nom and how it works, the `artemis` code should be much easier to understand.