Saturday, August 10, 2019

ARTEMIS - Android support for APOLLO

Short Version

Introducing a Python 3 script, with corresponding modules, that extends Sarah Edwards' APOLLO framework to support Android devices.


ARTEMIS (Android Review Timeline Events Modular Integrated Solution)
  1. Python 3 script that parses Android UsageStats XML files for automatic ingestion by APOLLO. 
  2. Adds pattern of life analysis of Android devices to APOLLO.
  3. Will continue to evolve so it can parse additional non-APOLLO-supported data sources for APOLLO ingestion and analysis.
New APOLLO modules for Android devices:
  1.  UsageStats 
  2.  ContextLog 
  3. SamsungMembersDeviceEvents 
  4. SamsungSmartManagerUsageLog 
Converted the original Python 2 APOLLO script to Python 3. Works on Windows 10.
 A full file system extraction of the Android device will produce the best results.
ARTEMIS, the updated APOLLO, and the Android APOLLO modules can be found in the following fork/branch:
Long Version 

When Sarah Edwards released APOLLO in November of last year I was highly impressed by the importance of aggregating specific pattern of life data (PLD) from iOS devices in a timeline format. Since then I have been looking for PLD in Android devices that can provide the same type of insight that APOLLO aggregates for iOS devices. This search motivated me to create the DFIR SQL Query Repo, a UsageStats XML parser, and multiple Magnet Forensics Artifact Exchange custom artifacts. Yesterday, as I was driving home from work, I thought about how to aggregate all those PLD artifacts in an APOLLO-like format. Then it hit me; not a car, but the idea of feeding non-SQLite Android files to APOLLO for parsing and analysis. Hence ARTEMIS was born.


If you are not familiar with APOLLO you will be well served by becoming so. It can be found here. The framework has been instrumental in many DFIR cases. Its only issue was that it was coded in Python 2, which will reach end of life at the end of 2019. In order to extend its functionality I had to port it to Python 3. For reasons unknown to me I couldn't make APOLLO work in a Windows environment even with Python 2 installed. I depended on my trusty Apple computer, but that required moving data to and from my main DFIR boxes. By coding APOLLO in Python 3 my Windows machines can play nicely with it. Also, since I code in Python 3, I could integrate it with my nascent ARTEMIS idea.


First order of business was to convert APOLLO to Python 3. Thanks to the Python Modernize project it was trivial to do so. With APOLLO properly converted the next step was to find a way to prepare the largest PLD source in Android for APOLLO ingestion.

One of the main data sources APOLLO leverages for PLD in iOS is the SQLite knowledgeC database. There is no analogous SQLite database in Android. Thanks to Jessica Hyde's research I was made aware that similar PLD is contained in the UsageStats XML files. In order for APOLLO to ingest UsageStats data it has to be converted from XML to SQLite. ARTEMIS's purpose is to perform this conversion.
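The conversion step can be sketched in a few lines. This is a minimal illustration and not ARTEMIS itself: the element and attribute names (package, lastTimeActive, timeActive) and the output table name are placeholders, since the real usagestats schema varies across Android versions.

```python
import sqlite3
import xml.etree.ElementTree as ET

def usagestats_xml_to_sqlite(xml_path, db_path):
    # Flatten <package .../> elements into a SQLite table a framework
    # like APOLLO can query. Attribute names are placeholders, not the
    # real usagestats schema.
    tree = ET.parse(xml_path)
    rows = [(p.get("package"),
             int(p.get("lastTimeActive", 0)),
             int(p.get("timeActive", 0)))
            for p in tree.iter("package")]
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS data "
                 "(package TEXT, lastTimeActive INTEGER, timeActive INTEGER)")
    conn.executemany("INSERT INTO data VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()
    return len(rows)
```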

The last step was to create a module so APOLLO could parse this new SQLite database. Also, to avoid having to call two scripts and move things around manually, ARTEMIS uses most of the same arguments APOLLO does; this lets ARTEMIS hand off processing to APOLLO without user intervention, assuming both scripts are located in the same directory.
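The hand-off itself can be as simple as building APOLLO's argument list and shelling out to it. A minimal sketch, assuming APOLLO's CLI takes -o for the output type followed by the module and data directories; treat the exact flag names as assumptions against your APOLLO version.

```python
import sys

def build_apollo_command(output_format, module_dir, data_dir,
                         apollo_script="apollo.py"):
    # Mirror APOLLO's own invocation so ARTEMIS can hand off processing
    # without user intervention. Flag layout is an assumption.
    return [sys.executable, apollo_script, "-o", output_format,
            module_dir, data_dir]
```

ARTEMIS could then run subprocess.call(build_apollo_command("sql", "Artemis_Modules", "Data")) once its SQLite output is in place.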


The following images provide a visual of the results the user can expect from the APOLLO/ARTEMIS workflow.

  1.  Have APOLLO and ARTEMIS in the same directory
  2. ARTEMIS will use the YOLO option in APOLLO as default since APOLLO lacks any Android specific arguments. 
  3. Place your aggregated Android data sources in a directory. ARTEMIS, like APOLLO, will not parse forensic images. It will search logical files only.
  4. Place APOLLO modules that support your Android specific SQLite databases. At a minimum include the UsageStats module.
  5. Run ARTEMIS. When it is done, it will call APOLLO to finish processing.
The following image shows both scripts in the same directory. Your Android logical files will be located in the Data directory. The Android APOLLO modules are in the Artemis_Modules directory. The modules in the Artemis_Modules directory are a product of the research presented at the SANS DFIR Summit 2019. For details on the data these modules parse see here.

Both Python 3 scripts in the same directory 
ARTEMIS uses almost all of the same arguments as APOLLO.


Execute the following command:
python -o sql Artemis_Modules Data
UsageStats being processed and turned into a SQLite DB
 When ARTEMIS is done with the UsageStats it will call APOLLO to parse the data stores.

Yolo option selected by ARTEMIS
As seen at the end of the previous image the timeline will be found in the apollo.db file. I used the following SQL statement:

SELECT
datetime(Key/1000, 'unixepoch', 'localtime') as time,
...
FROM APOLLO ORDER BY time ASC
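A quick way to run this kind of timeline query outside a SQLite viewer is a sketch like the following. It assumes the combined output lands in a table named APOLLO whose Key column holds milliseconds since the Unix epoch, and it drops the 'localtime' conversion so results don't depend on the examination machine's timezone.

```python
import sqlite3

def apollo_timeline(db_path):
    # Return apollo.db rows ordered by their converted timestamp.
    # Table/column names are assumptions; verify against your apollo.db.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT datetime(Key/1000, 'unixepoch') AS time, Key "
        "FROM APOLLO ORDER BY time ASC").fetchall()
    conn.close()
    return rows
```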

End product
It is awesome to see how multiple PLD sources come together to paint a richer picture of the activity that transpired on a device at a particular time.

Future development

A big part of ARTEMIS will be to add support for additional Android PLD sources that are not in SQLite form. In the immediate future I will work on having ARTEMIS automate the conversion of iOS mobile installation log data into a SQLite format for APOLLO ingestion.

Obligatory WARNING!!!!!

The output of these scripts is for lead purposes only. Verification by you is not optional. Be aware that there is always danger involved in combining timestamps from multiple disparate sources. Always verify the provenance of the data and how the timestamps relate to each other.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Friday, July 26, 2019

Android - Samsung Traces of Deleted Apps

Short version

The following Android application artifacts were presented as part of the Traces of Deleted Apps presentation by Christopher Vance and Alexis Brignoni at the SANS DFIR Summit 2019 in Austin, Texas on July 26, 2019. Presentation slides will be available at
  • Samsung Members - Keeps a list of an app's display name, package name, is it system, and last used time in the following location, database, and -> table:
data/ -> android_app 
The app also keeps information on the following events: type (network, install, power, alerts), type values, and creation time. These are kept in the following location, database, and -> table:
data/ -> device_events
  • Samsung Smart Manager - Keeps a list of apps that have crashed during use. These are kept in the following location, database, and -> table:
data/ -> crash_info, excluded_app 
The app also keeps a list of app usage times to include package name, class name, start time and end time. These are kept in the following location, database, and -> table:
data/ -> usage_log
  • Samsung Context Log - Keeps a list of app usage to include timestamp, time offset, app id, app sub id, start and stop time, and duration in milliseconds between start and stop times. These are kept in the following location, database, and -> table:
data/ -> use_app
These artifacts keep the previously described data even after an app is deleted from the device.
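As a quick illustration of querying one of these stores, here is a sketch against a ContextLog-style use_app table. The column names (app_id, start_time, stop_time, duration) are assumptions drawn from the fields described above; confirm them with PRAGMA table_info(use_app) against your own extraction.

```python
import sqlite3

def contextlog_app_usage(db_path):
    # Convert the assumed millisecond-epoch start/stop times to readable
    # UTC strings and keep the stored duration for cross-checking.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT app_id, "
        "datetime(start_time/1000, 'unixepoch') AS start_utc, "
        "datetime(stop_time/1000, 'unixepoch') AS stop_utc, "
        "duration "
        "FROM use_app ORDER BY start_time").fetchall()
    conn.close()
    return rows
```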

Slight addition to the short version

The artifacts described previously are tied to factory installed apps on Android Samsung devices. These can be used for pattern of life analysis, app usage timelining, as well as indicators of app presence on a device after the app has been deleted.

The SQL queries used to extract the data can be located at the following URL:
 Within the DFIR SQL Query Repo go to the following locations:
  • Samsung Members
These will be available in Magnet Forensics Custom Artifact Exchange after final approval.

Longer addition to the slight addition to the short version

The following screenshots are examples of the type of data contained in these artifacts:

  • Samsung Members

com_pocketgeek_sdk_app_inventory.db -> android_app

App Inventory
com_pocketgeek_sdk.db -> device_events

Events. Notice the Package Installed Event.

  • Samsung Smart Manager
sm.db -> crash_info

Crash apps and time of crash
lowpowercontext-system-db -> usage_log

  • Samsung Context Log
ContextLog.db -> use_app

Notice how the data types being stored are almost the same as the ones kept by UsageStats.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Wednesday, June 12, 2019

Android - Samsung My Files App

Short version

Samsung mobile devices keep a list of stored media files in the following location and database:
These same devices also keep track of recent accessed media in the following location and database:
The following queries can be used as templates to extract data from the aforementioned databases.

  • FileCache.db
    • Table: Filecache
    • Fields: storage, path, size, date, latest_date
  • myfiles.db
    • Table: recent_files
    • Fields: name, size, date, _data, ext, _source, description, recent_date
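A minimal sketch of pulling the recent_files table, assuming recent_date is stored as milliseconds since the Unix epoch (an assumption worth verifying against known device activity):

```python
import sqlite3

def my_files_recent(db_path):
    # Most recently accessed files first; field names follow the list
    # above, and the /1000 epoch conversion is an assumption.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name, size, _data, "
        "datetime(recent_date/1000, 'unixepoch') AS recent_utc "
        "FROM recent_files ORDER BY recent_date DESC").fetchall()
    conn.close()
    return rows
```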
Long version

Samsung devices come preinstalled with the Samsung My Files app. The app can also be used on other branded devices by downloading and installing it via the Google Play store.

Samsung My Files app
The app description tells us the main software features.
[Key features]
- Browse and manage files stored on your smartphone, SD card, or USB drive conveniently. Users can create folders; move, copy, share, compress, and decompress files; and view file details.
- Try our user-friendly features. The Recent Files list: files the user has downloaded, run, and/or opened. The Categories list: types of files, including downloaded, document, image, audio, video, and installation files (.APK)
Stored files analysis

The My Files app directory data resides in the data/data/ directory as seen in the next image.

App directory contents
Within this directory the SQLite FileCache.db file can be found. In the FileCache table one can find information on stored media to include path, size in bytes, date timestamp, and latest timestamp.

A simple query can be produced to extract this data. One can be found here.

Recent files list analysis

Within the same database directory one can also find the SQLite myfiles.db file. The recent_files table keeps information on recently accessed files, as explained in the app description from the Google Play store. This table tracks file name, size in bytes, date, path, extension, source, description, and recent date.

A simple query can be produced to extract this data. One can be found here.

Why does this matter?

A list of files as recorded by the app can give us clues about what files once existed on the device if those files were later deleted. The utility of the recent files list is even more apparent, since we can correlate particular real-world events with the last usage of pertinent media on the device. User generated artifacts should be of interest to the analyst, even more so when they intersect with other parts of the case we are working. Only by knowing that such artifacts exist can we make use of them.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Tuesday, June 11, 2019

Android - Predictive text exclusions in Samsung devices

Short version

Samsung keyboard predictive text exclusions are located in the following location and database:
The following query can be used as a template to extract text exclusion entries from the RemoveListManager database.
  • RemoveList
    • Fields: removed_word, time_word_added
SwiftKey predictive text exclusions are located in the following location and text file:
  • Blacklist
    • Text file contents are composed of one excluded word per line. 

Long Version

The following discussion shows how excluded words from the Samsung keyboard's predictive text are stored in Samsung Android devices. 

Predictive text options in Samsung Android phones
Predictive text is an Android feature that learns the user's most typed words and presents them as options for autocomplete. For example, if I type the city name "San Juan" with regularity on my device, the next time I start typing "San" the predictive text option will volunteer the full name "San Juan" as an option to complete the word for me. Auto-predictive text saves the user time since I can type the full name in just 3 taps (San + a tap on the suggestion, or a tap of the spacebar for autocomplete) instead of the 8 taps needed for the full name to be spelled out.

What happens when the keyboard constantly gives you a suggestion you don't want for a particular set of initial letters? Imagine that instead of typing "San Juan" you now find yourself constantly typing "San Lorenzo" since you moved to a new city. Every time you type "San" you get the suggestion "San Juan" instead of "San Lorenzo". By long pressing the suggestion box the keyboard gives you the option to stop suggesting the pressed word moving forward.

What happens to the long-pressed word that will now no longer be suggested? The same process is used for text exclusions in the SwiftKey keyboard app. Where do these excluded or blacklisted words reside? Why would finding these items be of importance to the forensic analyst?

Samsung Keyboard Analysis

On Samsung Android devices data related to keyboard configurations reside in the data/data/ directory. The following image shows the contents of the aforementioned directory.

The databases folder exists only when a blacklisted word exists

Notice in the image above how the databases folder is highlighted. This directory did not exist until I added a word to be excluded on the Samsung keyboard. Within this directory resides the SQLite database named RemoveListManager. Within the database the RemovedList table keeps the excluded word list.

In the previous image the word EMBASSIES was excluded. Notice the added time. While doing testing the actual time of addition was 2019-06-11 09:13:51. There is a difference of 4 hours. It is my assumption that the date shown is UTC time in human-readable format. This underlines how important it is to test your conclusions. In your case work it is key to duplicate the environment by getting a phone similar to the original and doing your own testing.

Samsung SwiftKey Analysis

Just like the Samsung Keyboard, the SwiftKey keyboard app keeps pertinent data in the data/data/ directory. 

As seen in the previous image the user directory is where the excluded word list resides. The following image shows the contents of the blacklist text file.

For this analysis the creation and modified dates can be used to show when the list was first created and last modified. Words that fall between the first and last entries on the list lack a timestamp or a way to infer one.
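Reading the blacklist is trivial; a sketch like the following also grabs the file's modification time, which is the only timestamp available for the words in the middle of the list.

```python
import os

def read_blacklist(path):
    # One excluded word per line; the words themselves carry no individual
    # timestamps, so st_mtime at best brackets the last addition.
    with open(path, encoding="utf-8") as f:
        words = [line.strip() for line in f if line.strip()]
    return words, os.stat(path).st_mtime
```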

Why does this matter?

Excluded words are voluntary user generated events. When a user decides to exclude a word it is because the keyboard keeps suggesting it for letter combinations the user constantly types. What would a list of excluded words that mostly contains terms related to child exploitation tell the digital forensic analyst? What can the analyst infer the user was typing? When was the exclusion made? Can the timestamp be correlated to a location by the use of another type of artifact on the device? User generated events tend to have relevance to our analysis and should be sought out and aggregated. We can make out the forest by getting at all those trees.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com. 

Monday, June 3, 2019

Finding Badoo chats in Android using SQL queries and the MAGNET App Simulator

Short version

The Badoo Free Chat and Dating app keeps user generated chats in the following SQLite database:
The following queries can be used as templates to extract chats from the Badoo database:

  • Messages
    • Sender name, recipient name, chat message, create time, modified time, server status, payload.
  • User data
    • User ID, username, gender, age, user image url, photo url, max unanswered messages, sending multimedia enabled, user deleted.
By using the MAGNET App Simulator the messages can be easily seen in their native format. The simulator can be downloaded from here:
Long version

The Badoo application is a chat and dating platform for Android and iOS. The app website claims to have over 425,000,000 users and counting.

Large install base
The app seems to be fairly popular in the Google Play store, with over 4 million reviews.

The following analysis came about due to a request from a digital forensics examiner who was unable to parse the app data using commercial mobile forensic tools. I procured consent from my colleague to use the data sets in the creation of the queries and accompanying blog post. That being said, I will obscure usernames and chat content in the data sets because they are in French, which I do not speak, and I want to avoid publishing something without knowing what it says.

Analysis via SQL queries

The data is kept in the SQLite ChatComDatabase file located in the userdata/data/ directory. Within the database there are 2 tables containing data of interest.

The conversation_info table contains the user IDs, gender, user names, age, and profile photo URLs for all the users that chatted with the local Badoo app user. It is of note that the local app user's information is not contained within this table. To identify the local user information I emulated the app with the MAGNET App Simulator (more on that later) and was able to see the name and age of the local user.

Username obscured
With that information on hand I processed the app directory with Autopsy and did a text search for the user name which had a hit in the following path and filename:
Note the base64 formatted filename. Using CyberChef it was easy to convert the base64 filename to ASCII, as seen in the next image.

By looking at the contents of the settings file with Autopsy the following data can be obtained regarding the local user:

  • Username
  • Birth date
  • Telephone numbers
  • Weight & height
  • Body type
  • Workplace
  • Sexual orientation
  • Political orientation

It is of note that this user generated data will surely vary depending on how much the user adds to their profile. Further testing would be required to confirm.

Regarding the user data of individuals that exchanged messages with the local user the User data query can be used to get the following column values as seen in the next image.

The messages table contains the user IDs, timestamps, and chat messages. The chat messages are contained in a field labeled payload that holds them in JSON format. It is really easy to extract them using SQLite's json_extract function. For an example of how to use the json_extract function see the following post on Slack app message parsing:
Since the messages are referenced by their user IDs, a join select of the messages and conversation_info tables had to be used to determine the sender and recipient names. The select query had to take into account that the local user's information is not found within the conversation_info table. This made it difficult to join the tables by user IDs, since the most important user (the local user) had no user name data to join on. To overcome that obstacle I used two separate query conditions.

  1. Left join conversation info on sender_id = user_id
    This condition gave me all sender user names to include null rows that had data but no corresponding user name (i.e. the rows for the messages sent by the local user.)
  2. Left join conversation info on recipient_id = user_id
    This condition gave me all recipient user names to include null rows that had data but no corresponding user name (i.e. the rows for the messages received by the local user.)
With these two queries on hand the idea was to join both selects by each row's unique ID. This guarantees there isn't a one-to-many selection that would cause rows to be unnecessarily repeated. Then a simple ORDER BY created time puts all the messages in their proper order. I also added an ifnull condition to the query so that every null username value reads 'local user' instead. The query and the result look as follows:

To see the full query see the previously provided link
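The two-condition approach can also be condensed into a single query by LEFT JOINing conversation_info twice under different aliases, with ifnull relabeling the local user's missing rows. All table and column names here (messages, id, sender_id, recipient_id, created_timestamp, payload, user_id, user_name) are assumptions for illustration; see the previously provided link for the actual query.

```python
import sqlite3

# Condensed sketch of the join logic described above. Each LEFT JOIN
# resolves one side of the conversation; ifnull() fills in 'local user'
# where conversation_info has no matching row.
BADOO_CHAT_QUERY = """
SELECT m.id,
       ifnull(s.user_name, 'local user') AS sender,
       ifnull(r.user_name, 'local user') AS recipient,
       json_extract(m.payload, '$.text') AS chat_text,
       datetime(m.created_timestamp/1000, 'unixepoch') AS created_utc
FROM messages m
LEFT JOIN conversation_info s ON m.sender_id = s.user_id
LEFT JOIN conversation_info r ON m.recipient_id = r.user_id
ORDER BY m.created_timestamp ASC
"""

def badoo_chats(db_path):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(BADOO_CHAT_QUERY).fetchall()
    conn.close()
    return rows
```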

It is of note that I have added the payload data field with all the JSON content in it. This was important since some of the JSON content might not be a chat message but data regarding a shared image. When the chat_text field is null in the query results the examiner can simply go to the contents of the payload field to determine additional information like upload ID, expiration timestamp and the URL of the image itself. In the preceding image notice how the chat_text null field rows say "type":"permanent_image" in the payload field.
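That payload triage can itself be scripted with json_extract. The '$.type' and '$.url' paths are assumptions based on the "type":"permanent_image" entries described above; adjust them to the keys present in your own payload data.

```python
import sqlite3

def image_payloads(conn):
    # Rows whose payload has no chat text are candidates for shared-image
    # records; pull the assumed type and url keys out of the JSON.
    return conn.execute(
        "SELECT json_extract(payload, '$.type'), "
        "json_extract(payload, '$.url') "
        "FROM messages "
        "WHERE json_extract(payload, '$.text') IS NULL").fetchall()
```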

I plan to have these queries submitted to the MAGNET Artifact Exchange Portal soon.

MAGNET App Simulator
Main screen
As stated previously, I used the simulator to identify local user data by visualizing the app data through the app itself. The process is simple and straightforward.

The first thing to do is extract the app APK from the device.

Load the APK

Then load the app directory.
Load app directory

The simulator brings up an Android instance within VirtualBox, installs the APK, and injects the app data into this new virtualized app instance.

Installing, importing, & injecting

The results are incredible.
Chats viewed in the app itself, as intended

This analysis was interesting to me for a couple of reasons. The first underlines the importance of always doing a manual visual check of which apps are present in our extractions and how many of those are parsed by our tools. The difference requires manual attention, since the most important piece of data might reside where it is not readily found. The second reason is that simulation or virtualization of apps is no substitute for manual database analysis; both techniques can and should be used together to guide a deeper analysis of the application data. Without the combination of both techniques the rich repository of local user data might have gone unnoticed, since it wasn't accessible in the databases nor in the virtualized screens.

To end, I would like to thank not only those who contribute to the DFIR field with tools, scripts, and data sets but also those who reach out to ask questions because they want to learn and grow. Truly there is no better way to learn than by trying to fill the gaps of things yet to be known.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Friday, May 31, 2019

Android SystemPanel2 - App usage tracking

Short version

The 3rd party app SystemPanel2 keeps timestamped system wide app usage statistics in the following database:
The following queries can be used as templates to extract information from the SystemPanel2 database:

  • History Master
    • Timestamps start & end, battery data, CPU usage, screen usage, cell and WiFi signal strength, CPU clock, and load information.
  • History Process
    • Timestamps start & end, PID, process name, type, CPU time, total CPU time, and foreground time.
  • History Foreground
    • Timestamps start & end, interval, process name, type, battery usage, and CPU usage
  • History Power
    • Timestamps start & end, name, power use, flags, wake count, and wake time.

Long version

Since Sarah Edwards came out with her earth-shattering iOS analysis of the KnowledgeC database, and then her industry-leading APOLLO framework, I have been obsessed with finding ways of getting similar data sets from Android devices.

To this end, one of the most enjoyable research projects I embarked on this year involved working on some scripts that parse the UsageStats and Recent Tasks XML data stores in Android. These scripts took as their foundation the incredible research done by Jessica Hyde. Their usability was enhanced by the magical Python GUI work of Chris Weber. This work is of particular importance for Android due to the fact that these XML files reside in most, if not all, Android devices. Still, I knew there had to be better captures of app usage via third-party apps because of some work I had done last year on the Android CCleaner app.

This blog post will be the first of a series that will look into pattern of life, app usage and system data that can be extracted from Android third-party apps. Some of these apps are user installed via the Play Store but others will come by default on branded devices. Some of these apps will require root access to extract the data while others will spill the usage secrets via plain and simple ADB. Here is the first of these...

Testing Platform

For my testing platform see here.
For this particular post a rooted Samsung S7 Edge was used.


The best explanation of what data this app records can be found in its Google Play store description. It reads as follows:
SystemPanel is a tool to let you view and manage just about everything possible about the goings-on of your device and visualize it in an easy-to-understand graphical format. 

Features include:

* Show active apps, record app battery, CPU, and wake lock usage over time to show potential battery drain issues
* Draw plots showing how you used your phone over time and how much battery disappeared as a result
* Analyze recent battery consumption and device wakeups (wakelocks), showing potential problem apps
* Manage installed apps, backup app APKs, uninstall apps, and re-install archived versions
* View apps categorized by the permissions they require
* Disable system packages [ROOT required]
* Disable individual services of apps (e.g. OTA updates) [ROOT required]
* Browse all the technical nitty-gritty about your phone

This app uses Accessibility services. SystemPanel's "Usage" feature can optionally use an "accessibility service" to show you how much time you're spending in each app on your phone, and when you use them throughout the day. This is useful for those with addiction disorders (and/or their parents or legal guardians) to avoid addictive use of the device/specific applications. Use of this service is optional, and like the rest of SystemPanel, no collected data is sent from the device, it is only displayed to the user.
It goes without saying that the amount of data this app tracks is incredible. The data is kept in the SQLite SystemRecord.db file located in the userdata/data/nextapp.sp/databases/ directory. Within the database there are 4 tables housing the data of interest.

History_Master Table

This table contains information on timestamps, battery data, CPU usage, screen usage, cell and WiFi signal strength, CPU clock, and load information.

A simple scroll down the results shows how, when a device is in use, the screen usage, load, and CPU stats tend to be high while battery charge goes down (assuming it is not plugged in). In my test data it was easy to see how most heavy usage took place during the daytime hours. It would be trivial to export the data into Excel to create usage graphs for all the parameters being tracked. This is true of the data in all the tables. Each row of data in the table has a 7 second interval between the start and end timestamps.
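The Excel export can be sketched in a few lines. The table and column names below (history_master, start_time, end_time, battery, screen) are assumptions based on the description above; list the real ones first with PRAGMA table_info(history_master).

```python
import csv
import sqlite3

def export_history_master(db_path, csv_path):
    # Dump the assumed history_master columns to CSV, converting the
    # millisecond-epoch start/end times to readable UTC strings so the
    # rows graph cleanly in Excel.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT datetime(start_time/1000, 'unixepoch') AS start_utc, "
        "datetime(end_time/1000, 'unixepoch') AS end_utc, "
        "battery, screen FROM history_master ORDER BY start_time").fetchall()
    conn.close()
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["start_utc", "end_utc", "battery", "screen"])
        writer.writerows(rows)
    return len(rows)
```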


History_Process Table

In this table we get our first set of data related to both app bundle IDs and Android OS processes.

As seen in the previous screenshot, bundle IDs have a type classification of 2 whereas non-reverse-URL designations have a type classification of 1. Each row of data in the table has a 3 second interval between the start and end timestamps.


History_Foreground Table

The table name is self-explanatory.

It is of note that the time interval between the start and end timestamps is variable; it is contained in the interval field, with values in seconds.


History_Power Table

Power usage aggregation by app / process.

Each row of data in the table has a 7 second interval between the start and end timestamps.

So what?

What use would such data have? For starters, it can tell us not only when a device was in use but also which apps were in use and which ones weren't, in 3 and 7 second time frames. Was the screen on (higher energy consumption) or not? Was the phone being charged at a particular time? Was the device on or off? How long was an app used for? The best part is knowing when all of these events were happening. The forensic applicability of such data cannot be overstated. To that end I will be submitting these queries as custom artifacts to the Magnet Forensics Artifact Exchange.


The preceding analysis of a third-party app demonstrates again the need to always manually review the app directories for items our forensic tools might not be aware of yet. A few extra minutes of work can yield incalculable benefits.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Friday, March 8, 2019

UsRT - Graphical interface for Android Usagestats and Recent Tasks XML parsers.

Introducing UsRT

Thanks to the hard work of Chris Weber (@RD4N6) we now have a way to parse the essential data contained in the Android Usage Stats and Recent Tasks XML files through a graphical interface. Like Eric Zimmerman says it is agent proof. Chris took my scripts, based on the research done by Jessica Hyde (@B1N2H3X), and made them accessible to all. Point and click goodness.

The application can be run as an executable (UsRT.exe) via the provided installer or through the python scripts directly. The installer has all dependencies included and is the easiest and fastest way to use the parser.

For details on the original research that motivated these scripts and the interface see Jessica Hyde's research at the SANS DFIR Summit 2018. For details on the parsing scripts see my previous blog posts for Usage Stats and Recent Tasks.

Script and installer links at the end of the blog post.

Features for Usage Stats:
  • Case information fields

  • Visual listing of files as they are processed in the left bottom corner of the interface

  • Rows and columns format with the ability to hide columns and select all rows, check rows, or uncheck rows.

  • HTML reporting

  • Ability to open already processed cases through the application generated case json file.
  • Included Read Me file that has a quick overview on usage with related screenshots. The Read Me can be accessed via the Help menu options.
Features for Recent Tasks:
  • Same features as Usage Stats with the addition of the recent images and snapshot fields. Pressing on the images will show them in your system's default image viewer. HTML reporting includes images as well.

Repository and installer

To get the scripts go to the following repository:
The installer is in the same repository in the release tab. 

As said at the beginning of the post, I am indebted to Jessica Hyde for doing the original research and to Chris Weber for putting in all the work and effort to maximize the use of the parsing scripts by making an awesome graphical interface for them.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Saturday, March 2, 2019

iOS Bplist Inception

Update 03/21/2019:
Script now decodes NSdata contents. See details at

Short version:

A Python 3 script that exports compound bplists from a specific field in an iOS knowledgeC database, extracts the internal bplist, and creates a triage HTML report of its contents. Two versions are provided, for iOS 11 and iOS 12, due to a slight difference in how the internal bplist is referenced within the external one that holds it.

The scripts can be found in the following location:
It's recommended that you load these plists into your viewer of choice to examine them directly.

Long version:

Like most DFIR things lately this one also started with Phill Moore. He reached out to the community on the following:

Since I've been on a data parsing binge lately I was happy to try and assist. As I was reading the replies to Phill's tweet I was reminded of how, of all the data structures utilized by Apple products, bplists are one of the most prevalent. So prevalent that they can be contained within SQLite databases and can themselves contain other bplists within them. Total data storage inception. At this point there was no doubt...

Thanks to kind souls like @i_am_the_gia, @ScottVance, and others who will remain anonymous, we got test data to see if we could do the following:

  1. Export the bplists intact from the SQLite DB.
  2. Extract a bplist (clean) from the bplist that holds it (dirty).
  3. Access the clean bplist and create a file that could be used in forensic tools for analysis.
  4. Generate a triage report of clean bplist data contents to easily evaluate relevance before importing to forensic tools.
There are many tools that let us view the contents of bplists, but when they are nested this way, getting to the internal content requires some manual work. For examiners the world over, manual work is just the universe telling you there is a need to automate and scale.

The database selected for our testing was the iOS knowledgeC database. I highly recommend everyone read Sarah Edwards' article on it, THE article on it. By looking at the Z_DKINTENTMETADATAKEY__SERIALIZEDINTERACTION field within the ZSTRUCTUREDMETADATA table, one can see how these bplists look when nested.

Notice how there are two bplist headers in the same SQLite database content. 
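One quick way to confirm that a field holds a bplist nested inside another is to scan the raw BLOB for the `bplist00` magic number and count the hits. This is a minimal sketch, not part of the released script; the `sample` blob is fabricated for illustration:

```python
def bplist_offsets(blob: bytes) -> list:
    """Return the byte offset of every bplist magic number in a blob."""
    offsets, start = [], 0
    while True:
        idx = blob.find(b"bplist00", start)
        if idx == -1:
            return offsets
        offsets.append(idx)
        start = idx + 1

# A fabricated blob with an outer and an inner bplist header:
sample = b"bplist00" + b"\x00" * 16 + b"bplist00" + b"\x00" * 8
print(bplist_offsets(sample))  # → [0, 24]
```

Two or more offsets in a single exported field is the "inception" case described above.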


Exporting the data was straightforward: a regular SELECT, assigning the content of the field to a variable, and writing that variable out to a file. For this to work the receiving file has to be opened for binary content. As seen in the next image, the extracted bplists are named with the following convention:
  • D/C = Dirty or clean. There is nothing wrong or dirty about the shell bplist; the term is just shorthand in opposition to the internal bplist, which I call clean after extraction because it lacks its bplist shell.
  • Z_PK = The field name in the table that contained the primary key for the row that contained the exported bplist.
  • Numeric value = Integer contained in the Z_PK field for the row that contained the exported bplist.

By establishing this filename convention the examiner can easily backtrack to the proper row from the target table if additional fields are of interest or if there is a question on the validity of the exported bplist.
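The export step above can be sketched in a few lines. The table and field names below are placeholders, not the actual knowledgeC identifiers; note the binary (`"wb"`) write and the D_/primary-key/row-value filename convention just described:

```python
import sqlite3

def export_dirty_bplists(db_path, table, blob_field, pk_field="Z_PK"):
    """Write each bplist BLOB to D_<pk_field>_<pk>.bplist so the source
    row can be backtracked later for validation."""
    conn = sqlite3.connect(db_path)
    names = []
    for pk, blob in conn.execute(
            f"SELECT {pk_field}, {blob_field} FROM {table} "
            f"WHERE {blob_field} IS NOT NULL"):
        name = f"D_{pk_field}_{pk}.bplist"
        with open(name, "wb") as f:  # binary mode, per the post
            f.write(blob)
        names.append(name)
    conn.close()
    return names
```

Because the primary-key value is embedded in each filename, any exported bplist can be traced back to its originating row with a single WHERE clause.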


Now that we had exported the bplists, we had to get to the clean ones in an automated way. Thanks to @firmsky I was reminded of an article by Sarah Edwards on the use of ccl_bplist for parsing NSKeyedArchiver bplists in Python. These bplist objects are beyond the scope of this blog; just know that I am grateful that Alex Caithness came up with this module, which saved me from experiencing a painful headache. You can find this great module here:
With this module in hand and some test data we figured out that:
  1. In iOS 11 one only has to deserialize the bplist at the root, which gives you the clean bplist.
  2. In iOS 12 one has to deserialize one level further, since the clean bplist is contained within the deserialized root object.
The previous was a long way of saying that in iOS 11 the following key ccl_bplist call
CleanBplistFile = ccl_bplist.deserialise_NsKeyedArchiver(DirtyBplistFile)
would give you the clean bplist ready to write out, whereas the following code
ns_keyed_archiver_objg = ccl_bplist.deserialise_NsKeyedArchiver(DirtyBplistFile)
CleanBplistFile = (ns_keyed_archiver_objg[""])
would give you the clean bplist after accessing that inner portion. It would be good to have further confirmation that these types of incepted bplists truly vary per iOS version and that this is not just a crazy coincidence of the data sets we had available.

Originally the purpose of this exercise was to find a way to easily extract the clean bplists in order to import them into forensic tools with minimum effort and no manual extraction. It became clear that a triage report was needed when one of my data sets contained 1565 extracted bplists. Be aware that the script developed will keep both the dirty and clean bplists in separate folders within a timestamped directory. In this way one can backtrack the whole process for validation purposes.


With a triage report that shows the content one can decide which set of bplists should be drilled down more or just retained due to work or case relevance. The fields on the html formatted report are the following:
  • Filename = Same format as stated before.
  • Intent Class = A value taken from a field in the knowledgeC table where the dirty bplists were stored. This value is key because it gives you a clue about the purpose of the contents of the bplist.
  • Intent Verb = Another value taken from one of the table fields; a further description of the bplist's purpose and/or type of content.
  • NSstartDate = Time stamp.
  • NSsendDate = Time stamp.
  • NSduration = Float value.
  • NSdata = Binary data store of activity.
Since this is a triage report, the NSdata values are just string representations of the binary values in the field. Although they contain many non-human-readable characters, it is easy to key in on the ASCII runs one can read. The report is a testament to my ignorance of how to convert these values into something more pleasing to the eye, but for triage purposes, helping the examiner decide what to process further with a forensic tool, it is perfect. Some of the values can be cleaned up a little with UTF-8 decoding, but many, especially those that contain a lot of data, cannot.
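A crude but effective way to produce that kind of triage string is to keep printable ASCII and mask everything else. This is a hedged sketch of the idea, not the exact code used in the script:

```python
def triage_preview(raw: bytes) -> str:
    """Keep printable ASCII bytes; replace everything else with '.' so an
    examiner can key in on the readable runs inside NSdata blobs."""
    return "".join(chr(b) if 32 <= b < 127 else "." for b in raw)

blob = b"\x01\x02Hello forensics\xff\xfe"
print(triage_preview(blob))  # → ..Hello forensics..
```

The dots preserve byte positions, so a readable run in the preview can be located again in the raw blob for deeper analysis.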

The next picture is an example of the report format. The particular data in the report was shared on the condition that it would not be shared publicly, hence the redaction.

It is up to the reader to test it out and discover for herself what awesome data resides in these structures. Things that are, things that were in one form and changed to another, and things that are no more.

Future work

I was surprised by the amount of data contained in just one field of one table in one database. I can only imagine what relevant data resides in incepted SQLite-held bplists in other tables and other databases. The next step is to evolve the script so it can extract any bplist blob from any SQLite table and generate dirty and clean instances as needed, with complementing reports for triage. A key part is to better understand how the NSdata fields work, and to see if anyone in the community knows how to parse them. If only the days had more hours and our bodies less need for sleep.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Tuesday, February 19, 2019

Android Recent Tasks XML Parser

This post is a continuation of my last blog post where I introduced a simple parser for the Android usagestats XML files.
In this entry I am introducing a parser for the Android recent tasks XML files. Like the previous parser it is based on the research done by Jessica Hyde that she presented at the SANS DFIR Summit 2018. You can see her excellent presentation here:
YouTube: Every Step You Take: Application and Network Usage in Android 
The presentation slides, in PDF format, can be found here:
PDF Slides: Every Step You Take: Application and Network Usage in Android 
As explained in the presentation the Recent Tasks XML files record the following activities for recently used apps:

  • Task ID number = Used to correlate snapshot and recent image files.
  • Effective UID = App identifier.
  • First active time = Timestamp in millisecond epoch time.
  • Last active time = Timestamp in millisecond epoch time.
  • Last time moved = Timestamp in millisecond epoch time. 
  • Affinity = Bundle ID name.
  • Calling package = Bundle ID or process that called the referenced recent task.
  • Real activity = Gives information on app usage at time of recording and snapshot creation.
These XML files are located in the following directory:
In addition to these XML files, recent tasks can produce snapshot images as well as recent images. Details about these are contained in the previously referenced presentation. These images can be found in the corresponding directories:
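The attributes listed above can be pulled from a recent tasks XML file with the standard library alone. The attribute names in this sketch are illustrative and should be verified against your own extracted XML files; the millisecond-epoch conversion matches the timestamp format described above:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Illustrative sample; real files may use different attribute names.
sample = """<task task_id="77" effective_uid="10132"
    first_active_time="1550600000000" last_active_time="1550600123000"
    affinity="com.example.app" />"""

def ms_to_utc(ms: str) -> str:
    """Millisecond epoch string -> human-readable UTC timestamp."""
    return datetime.fromtimestamp(int(ms) / 1000, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S")

task = ET.fromstring(sample)
row = {k: (ms_to_utc(v) if k.endswith("_time") else v)
       for k, v in task.attrib.items()}
print(row["first_active_time"])  # → 2019-02-19 18:13:20
```

From here each `row` dictionary maps directly onto an INSERT into a SQLite table, which is essentially what the parser does.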

In order to leverage the data contained in these XML files and images I made a parser in Python 3 that takes the XML information and puts it in a SQLite database for ease of querying. The script can be found here:

The script has been tested and found to be accurate on my own data sets. Not all recent tasks will contain all data events or related images. Additional testing and validation of the script is humbly requested and more than welcomed.

Script usage

1. Extract the three directories mentioned previously from your Android source device. The extraction should be logical and should not include forensic-tool-generated recovered items such as deleted files and/or file slack.

2. Place the script and the noimage.jpg file from the repository in the same root directory as the extracted directories.

Have this before running script.
3. Run the script with no arguments.

Script is done.
4. When completed, the script will generate two files: a SQLite database named RecentAct.db and a report file named Recent_Activity.html.

What you should see after a successful run of the script.

Note that the RecentAct.db SQLite file will contain two fields populated with all the XML attributes in JSON format. The analyst can use json_extract to craft custom queries against any of the attributes within the XML.
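As a sketch of that kind of custom query, the snippet below builds a throwaway table with a JSON attributes field and pulls one attribute out with json_extract. The table and field names are hypothetical, not the actual RecentAct.db schema, and json_extract requires SQLite's JSON1 support (compiled in by default in modern builds):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recent_tasks (id INTEGER, attributes TEXT)")
conn.execute("INSERT INTO recent_tasks VALUES (1, ?)",
             (json.dumps({"affinity": "com.example.app",
                          "effective_uid": "10132"}),))
# Pull a single XML attribute out of the stored JSON.
rows = conn.execute(
    "SELECT json_extract(attributes, '$.affinity') "
    "FROM recent_tasks").fetchall()
print(rows)  # → [('com.example.app',)]
```

Swapping `'$.affinity'` for any other attribute path is all it takes to query a different XML attribute.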

5. Open the Recent_Activity.html report.

Sample report entry.
For every recent task there will be a table with pertinent information as well as the snapshot and recent image files that correspond to it. To view the images full size just click on them. Be aware of the importance of the creation times of these image files within the source media. For details see the presentation previously mentioned.

It is of note that, in some of my test data samples, not all recent tasks had corresponding images or full sets of attributes. When a recent task lacks corresponding images, the script will reference the noimage.jpg file instead.

Missing image and missing attributes.
For missing attributes the report will state 'NO DATA' and/or 'NO IMAGE' in the Key and Values columns as needed. Be aware that the SQLite database has all attributes in JSON format for custom query generation.


I want to thank Jessica Hyde again for her research and for making the community aware of these artifacts. Hopefully this script can make it easier to give much needed context to these images and apps whose value might not be found anywhere else on the source device.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.