Wednesday, November 20, 2019

Realm database storage primer for digital forensic examiners

Realm databases are non-relational storage structures that are being used in place of traditional SQLite stores in many newer and recently updated mobile apps. As examiners we are faced with locating and parsing these newer data stores, and as expected there is always a lag between the adoption of newer technologies and commercial third-party forensic support. The purpose of this small primer is to enable you to find these Realm databases and access their contents.

Analysis Tools


What is a Realm database?

Some time ago I was made aware of a popular app that all of a sudden was not being properly supported by our usual forensic tools. When one of my colleagues told me that the data was contained within a Realm database I had no idea what he was talking about. A quick Google search led me to the developer's webpage, realm.io.

Alternative to SQLite. Interesting...
Realm is an alternative to the relational SQLite databases we are used to. In this model, called NoSQL, data is saved as objects. Imagine a database cell where you can store and manipulate not only single values but a list of values. The way Realm databases access these values makes apps that use this model faster than traditional SQLite stores. For a more detailed explanation of Realm capabilities, like zero copy, live objects, MVCC, and ACID, see here. As with most things, there are pros and cons to each approach; I leave it to the interested reader to research why a developer would choose to build an app's data store on Realm or SQLite. For our purposes we just want to find these databases and be able to see their contents.
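To make the object-versus-row idea concrete, here is a small illustrative Python sketch (this is not Realm's actual API, just a conceptual contrast under that assumption):

# Illustrative only: contrasting a relational row with an object-style record.
# A SQLite row is flat; storing a list of values normally requires a second
# table and a join.
sqlite_row = ("Alice", 30)  # (name, age) -- one value per column

# An object store like Realm persists whole objects, so a field can hold a
# list directly and be read back without a join.
realm_style_object = {
    "name": "Alice",
    "age": 30,
    "favorite_colors": ["red", "green", "blue"],  # a list stored in one field
}

print(realm_style_object["favorite_colors"][1])  # -> green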

Sample app that uses a Realm database

Usually I work with apps from the Google Play store to illustrate the extraction and parsing of data. In this case I used a simple Android project whose only purpose is to illustrate the input and output of data from a Realm database. Sadly, the generation and extraction of sample data using a fully working app takes time and effort that I currently don't have. Using this simple project and an Android virtual machine I could do the generation and extraction of data in minutes for illustration purposes.

The Android project was made by Dheeraj Andra and it is a simple app that takes a name and age as input. These values are shown to the user on screen after being saved in the Realm database. Links to the project are included above. To generate the APK needed to run the app in my Genymotion virtual machine I used Android Studio. With the APK in hand I ran it on my virtual machine. The following is an image of the app in use:

Sample Realm app
At the top the user enters values and these are shown back at the bottom part of the app.

Storage location

As with any Android app, the data of relevance is contained within the application directory. With SQLite databases the data is usually contained in the databases directory. By contrast this Realm app, as well as others I have come across lately, has no databases directory.

Where is the databases folder?
The data we seek resides in the files folder, and the files we need to extract are the ones that end in the .realm extension.

Get the .realm file!
As explained in some of the links above Realm databases have no need for write ahead logs or rollback journal files. In this example extracting the default.realm file has all the data we seek.
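If you need to hunt for these stores across a full file system extraction, here is a minimal sketch (the extraction layout is whatever your tool produced; the script name and usage are hypothetical) that walks a directory tree and prints every file ending in .realm:

import os
import sys

def find_realm_files(extraction_root):
    # Walk the extraction directory and yield every file ending in .realm
    for dirpath, _dirnames, filenames in os.walk(extraction_root):
        for name in filenames:
            if name.lower().endswith(".realm"):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    # Usage: python find_realm.py /path/to/extraction
    for path in find_realm_files(sys.argv[1]):
        print(path)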

How to view Realm database contents

What do we do with the file after it is extracted? The database developers have a free tool called Realm Studio. Open the .realm file with Realm Studio and you will be able to browse the contents.

Values
As an app gains more classes and more data it can be useful to export the contents in JSON format for further analysis. It goes without saying that when an examiner transforms the data for parsing and analysis, great care has to be taken to validate every single piece of resulting data. The farther we move away from the original format, the more important it is to maintain direct links back to it.
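As a sketch of that further analysis, assuming the Realm Studio JSON export is an object keyed by class name with a list of records under each key (verify against your own export; the file name here is made up):

import json

with open("default_export.json", "r", encoding="utf-8") as f:  # hypothetical export file name
    data = json.load(f)

# Assumed structure: {"ClassName": [{field: value, ...}, ...], ...}
for class_name, records in data.items():
    print(f"{class_name}: {len(records)} record(s)")
    for record in records:
        print("  ", record)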

In the previous image the bottom part of the screen shows the contents of the sample realm database while the upper part is the same data in JSON format.

Recently (November 19, 2019) Cellebrite announced that their database viewer tool now supports Realm databases.
I expect other vendors in the space to follow suit. This is important because I foresee Realm databases, as well as JSON and protobuf, becoming common mobile app data stores in the near future.

Last but not least remember that having multiple ways of looking at a data set is important for validation purposes.

As always I can be reached on twitter @AlexisBrignoni and via email 4n6[at]abrignoni[dot]com.



Monday, October 28, 2019

Recreate Android apps, folders, and widget screen positions from a forensic extraction.

Short version

Android 9 uses the launcher.db SQLite database to keep track of where app, folder, and widget shortcuts are located on the device's screens. For stock Android the database can be found in the following directories, depending on the launcher in use:
/data/data/com.android.launcher3/databases/ - Quickstep Launcher
/com.google.android.apps.nexuslauncher/ - Pixel Launcher  
The relevant forensic data in the table named favorites is as follows:

  • Type of shortcut - Icon, folder, widget, etc...
  • Unique ID - Integer value
  • Name
  • Modified time - Epoch timestamp in milliseconds
  • Icon image blob - PNG file

Android home screens can be recreated using the following python3 Android Launcher DB parser script:
https://github.com/abrignoni/Android-LauncherDB-Parser/
Here are the shortcuts as viewed via the device's screen:

Android screen + shortcuts
Here are the shortcut positions and metadata as recreated via the use of the script:

Upper screen

Lower screen

It is of note that not all icons are contained within the launcher.db file. Additional details in the long version of the blog post.

Testing and analysis platform

Macbook Pro 14.1
Genymotion Android VM - Pixel 3 XL Stock Android (Quickstep & Nexus launchers)
SQLite Browser
Python 3 - Numpy + Pandas

Long version 

A few weeks ago I participated in a capture the flag exercise that included an Android device. One of the questions asked us to identify the name of a shortcut at a precise screen location within the Android image. As I looked for the answer it made me think about practical uses for this information. Why would anyone care where, and on what screen or screens, the shortcuts for apps, folders, and widgets are located on a mobile device? What could that tell us about our case? What forensic purpose could there be?

In many Windows OS cases, showing what directories, LNK files, and other files were located on the desktop painted a clear picture of what the user deemed important enough to have easy access to. It speaks of usage, of convenience, and of how the user decides to aggregate data. Can the same be said of Android home screens? I believe so. Consider that folders on Android home screens are created by the user long pressing an icon/shortcut and dropping it on top of another shortcut. The name of the folder is user generated if it is not Unnamed Folder. That is a lot of user activity to place the items where they are. Same thing with widgets: there is a lot of user interaction to generate a widget and place it on the screen. What if these interactions were timestamped?

As stated in the short version at the top the data resides in the /data/data/com.android.launcher3/databases/ and /com.google.android.apps.nexuslauncher/ directories on stock Android devices. The control of how these screens operate is the responsibility of launcher software.

Google Play Store - Launcher Apps
As seen in the previous image, stock Android launchers are not the only game in town. This blog post limits itself to the two stock types, but the device you might be looking into could use any of these other launchers. Further research is needed to verify how these other launchers keep their data. My assumption is that, to be compatible with Android, the databases will most likely be similar to the stock versions. This is something that would need follow up.

Launcher.db

Within that path, the database of interest is an SQLite database named launcher.db. The favorites table has all the relevant data regarding the position of items on screen, as well as when these items were placed in their locations.

The following image shows the most pertinent fields in the table.

Relevant launcher.db fields
To better understand the data the following image shows the main divisions of an Android screen:


Blue area = Home screen. User can swipe left or right to access more screens.
Red area = Home bar. Persistent on all screens. Only has 5 spaces available.

Purpose of the stored data per field (a query sketch follows the list):

  • _id - Unique identifier for each app, widget, and folder. Integer value.
  • title - Name of the app shortcut or folder. Widgets do not keep a name in this field.
  • container - Number identifies the type of screen. 
    • -101 - Home bar. Bottom row that does not change as the home screens change when you swipe left or right.
    • -100 - Home screen container number. If you have more than one screen that will be identified by the value in the screen field.
    • Positive integer values - These map to the _id number in the previous field. This is important because it shows the items in question reside in a folder with the corresponding _id number.
  • screen - Identifies the home screen where the apps will be shown. Every time a new screen is made the number increments by one. The exception is the home bar (container -101). Apps in the home bar are limited to 5 and each position will be identified with a 'screen' number.
  • cellX and cellY - Used to identify the location of the item to be placed on screen. It is a 5x5 grid on the home screen and 1x5 for the home bar.
  • spanX and spanY - Used to determine the ending location of the item on screen. If a widget covers a 3x3 space, the combination of cellX and cellY as the starting point and spanX and spanY as the end points allows the mapping of items that span more than one space of the screen.
  • itemType - Used to determine what type of item is on screen. So far I've identified the following:
    • 0 - App
    • 2 - Directory / Folder
    • 4 - Widget that spans more than 1x1 space
    • 6 - Widget spans a 1x1 space
  • icon - Data blob that contains a PNG of the item to be displayed. Be aware some items, like folders and some widgets, might not have data in this field.
  • appWidgetProvider - Widgets do not have values in the title field. This one indicates the bundle id for the app that populates the widget data.
  • modified - Timestamp in epoch milliseconds that tells you when the placement of the icon on screen last occurred. If the user moves the location of an item the modified time changes to reflect when the move happened.
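With those fields in mind, here is a minimal query sketch using Python's sqlite3 module (field names are taken from the list above; the epoch milliseconds conversion follows the description of the modified field):

import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("launcher.db")
query = """
    SELECT _id, title, container, screen, cellX, cellY,
           spanX, spanY, itemType, appWidgetProvider, modified
    FROM favorites
    ORDER BY container, screen, cellY, cellX;
"""
for row in conn.execute(query):
    (item_id, title, container, screen, cell_x, cell_y,
     span_x, span_y, item_type, widget_provider, modified) = row
    # modified is stored as epoch milliseconds
    placed = datetime.fromtimestamp(modified / 1000, tz=timezone.utc)
    # Widgets carry no title, so fall back to the appWidgetProvider bundle id
    print(item_id, title or widget_provider, container, screen,
          (cell_x, cell_y), (span_x, span_y), item_type, placed.isoformat())
conn.close()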
By understanding these relationships we can recreate the mapping of all items. For this purpose I created a Python 3 script that can be found here:
https://github.com/abrignoni/Android-LauncherDB-Parser/

Python 3 Script

The script usage is very simple.

1- Extract the launcher.db file from the indicated path.
2- Place it in the same directory as the script

3- Run the script by typing: python launcher.py

EZ PZ
4- A directory named with a timestamp will be created for each run of the script. Within it you will find HTML representations of all screens and folders. Be aware that folders can be mapped to their corresponding home screens by matching their id numbers.

Notice the timestamp for the run
5- At a minimum you will see the icons directory (contains all the extracted PNG files from the db), the Bottom_Bar.html and MainScreen0.html files. In the following image you can see these HTML files as well as additional ones that correspond to folder screens/content. Notice that these are named ScreenDirectory followed by the corresponding id value, which is used to map the report to the folder location on the screen.

All the screens + all the folders/directories
Every relevant data point is included in the report, including the icon if available.


The script will do the necessary time conversions as well.

Conclusion

As always, further testing and validation are never out of place. Although my script manages unknown item types, it would be really useful to identify additional ones so a proper type descriptor can be included. Hopefully this type of reconstruction can give the examiner a window into how the user managed the device, how some apps might have been grouped in folders, and maybe which apps were of most interest to the user by virtue of being on the home screens and home bar.

As always I can be reached on twitter @AlexisBrignoni and via email 4n6[at]abrignoni[dot]com.

Sunday, September 29, 2019

iOS Snapshots Triage Parser & working with KTX files

Short version

Collection of scripts that assist with parsing the iOS snapshot bplists in the applicationState.db data store. Snapshot images show what was last on screen for a particular app before the app was closed or sent to the background. The research behind this artifact was conducted by Geraldine Blay (@i_am_the_gia). For details on the meaning and importance of these files read her awesome write up here:
https://gforce4n6.blogspot.com/2019/09/a-quick-look-into-ios-snapshots.html
Script download:
https://github.com/abrignoni/iOS-Snapshot-Triage-Parser
Script purpose and workflow:
  1. Run SnapshotImageFinder.py to identify iOS snapshot images and extract them from a targeted iOS file system extraction directory. These files end with @3.ktx or @2.ktx.
  2. Extract the macOS Automator quick action from the ktx_quick.zip file. With it convert all the .ktx files extracted in step #1 to .png format.
  3. Run SnapshotTriage.py to extract, parse and match the images with the snapshot metadata contained in the extracted bplists from the applicationState.db datastore. This script accepts 2 parameters, the directory where the applicationState.db is located and the directory where the .png files are located.
  4. After execution of the previous steps a triage report directory will be created containing the extracted bplists and per-app iOS snapshot reports in HTML. 
Full details with screenshots are below in the long version section of the blog.

Results:

The following image is of an iOS Snapshot report generated by SnapshotTriage.py for the Uber app. It is of note that these snapshots can contain chats, images, maps, anything the app was doing at the time it was taken out of focus and/or sent to the background.

iOS snapshot for the Uber app.

Long version


The importance of being able to recover what was on screen when an app was last used is pretty clear. But beyond the evidentiary value of the images and when they were generated, this speaks to the pressing need to teach our first responders to limit interactions with a suspect/target device. By manipulating the device on scene, this data might reflect the first responder's fingering around instead of data of value for the case.

The functionality that enables the creation of such screenshots in iOS devices is called Snapshots. Android has the same functionality and it is called Recent Tasks. I made a script to parse these, it is located here:
https://github.com/abrignoni/Android-Recent-Tasks-XML-Parser
I first became fully aware of the snapshot functionality in iOS thanks to my dear friend and colleague Geraldine Blay. You can and should follow her on Twitter here:
@i_am_the_gia
Her research can be found on her blog post here:
https://gforce4n6.blogspot.com/2019/09/a-quick-look-into-ios-snapshots.html
I have made a script that automates the extraction of these snapshots and their timestamps. In her blog post you can find the paths and structures needed to validate by hand the automated output from my script.

The script to parse the iOS snapshots is located here:
https://github.com/abrignoni/iOS-Snapshot-Triage-Parser
Script prerequisites:

To parse the iOS snapshots one needs to be working with an iOS full file system extraction. For details on iOS file system extractions see the acquisition section of the following blog post:
https://abrignoni.blogspot.com/2018/08/finding-discord-chats-in-ios.html
Since the app images are in Apple's ktx format, a macOS computer will be needed. After much searching and asking around I have yet to find a Python library or CLI tool that can convert ktx images to png. This is necessary since ktx is not a widely supported image format. Our script will produce a more universal form of output: png images referenced in HTML. To do the ktx to png conversion we will use an Automator Quick Action in macOS.

The scripts and detailed workflow:

1. SnapshotImageFinder.py
Used to extract snapshot ktx files from the iOS file system extraction. Usage is really simple:

usage: SnapshotKtxFinder.py [-h] data_dir_to_analyze

This script will recursively go through and pull out all the relevant ktx files. It will place them in the FoundSnapshotImages directory. Here is the CLI output for some sample data I provided to it:


Searching for iOS Snapshot images.Please wait...
Snapshot ktx files moved to: /Users/abrignoni/Desktop/iOS-Snapshot-Triage-Parser-master/FoundSnapshotImages
Snapshot ktx files moved total: 485
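For reference, a minimal sketch of the kind of recursive search the script performs (the suffixes come from the short version above and the output directory name mirrors the one the script uses; everything else is an assumption, not the script's actual code):

import os
import shutil
import sys

SUFFIXES = ("@2.ktx", "@3.ktx")   # snapshot file name endings noted above
OUTPUT_DIR = "FoundSnapshotImages"

def collect_snapshots(extraction_root):
    # Recursively copy matching ktx files into the output directory
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    count = 0
    for dirpath, _dirnames, filenames in os.walk(extraction_root):
        for name in filenames:
            if name.endswith(SUFFIXES):
                shutil.copy2(os.path.join(dirpath, name),
                             os.path.join(OUTPUT_DIR, name))
                count += 1
    return count

if __name__ == "__main__":
    print("Snapshot ktx files found:", collect_snapshots(sys.argv[1]))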

2. ktx_quick.zip

This zip file contains a macOS Automator Quick Action that will convert the ktx files to png files automatically.

First unzip the file on your macOS computer and click the unzipped file. When asked if you want to install, say yes. If this gives you the heebie-jeebies, use Automator to create the conversion file yourself. After install you will see the following:

Installed ktx_quick 
By installing we have the ability to right click on any directory and from the context menu execute our quick action. For our purposes we will do so on the FoundSnapshotImages directory previously mentioned.

Notice the option in blue.
As the action runs you will see a progress gear on the right side of the menu bar at the top of the screen.

Progress
When done the converted_snapshots directory will be created on your desktop. Take that directory and place it in the same directory where the python scripts are located.

From here:

At the desktop.
To here:

Inside the directory for the scripts
3. ccl_bplist.py

This module is used to deserialize NSKeyedArchiver bplists. Thanks to Alex Caithness who came up with this module. It saves us a lot of headaches. I added his module to my repo for convenience. It can be downloaded directly from the source here: https://github.com/cclgroupltd/ccl-bplist.

4. SnapshotTriage.py

All output from this script will be created in a timestamped folder. This allows us to run the script multiple times without any overwriting issues. Thanks to Phill Moore (@phillmoore) for the idea.

All script output directories will be in these.
The script will first locate the applicationState.db file. The following query is used to identify the blob fields that contain the iOS snapshot bplists.

SELECT
application_identifier_tab.id,
application_identifier_tab.application_identifier,
kvs.value
FROM kvs, application_identifier_tab, key_tab
WHERE application_identifier_tab.id = kvs.application_identifier
and key_tab.key = 'XBApplicationSnapshotManifest'
and key_tab.id = kvs.key

The bplists contain the image filename, bundle id name, and the timestamps of the images. With the filename it is easy to connect the exported images to the corresponding app name/bundle id and timestamps. As mentioned in previous blog posts, bplists can sometimes be incepted; this means a bplist is inside another bplist. This case is one of those. The script will extract the bplist from the database and save the incepted bplist in the ExtractedBplistFirstLevel directory. After doing so it will extract the internal bplist and save those in the ExtractedBplistSecondLevel directory. The idea is that this allows the examiner to validate the whole process step by step as well as providing a way to execute third party tools over the extracted data.
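As a rough sketch of that first extraction step, using the query above and the ccl_bplist module described below (the output file naming here is illustrative and not necessarily what SnapshotTriage.py does):

import io
import sqlite3
import ccl_bplist  # https://github.com/cclgroupltd/ccl-bplist

QUERY = """
SELECT application_identifier_tab.application_identifier, kvs.value
FROM kvs, application_identifier_tab, key_tab
WHERE application_identifier_tab.id = kvs.application_identifier
  AND key_tab.key = 'XBApplicationSnapshotManifest'
  AND key_tab.id = kvs.key;
"""

conn = sqlite3.connect("applicationState.db")
for bundle_id, blob in conn.execute(QUERY):
    # Save the first-level bplist exactly as stored in the kvs.value blob
    with open(f"{bundle_id}_first_level.bplist", "wb") as out:
        out.write(blob)
    # Deserialize the NSKeyedArchiver payload; the inner (incepted) bplists
    # holding the snapshot filenames and timestamps live inside this structure
    manifest = ccl_bplist.deserialise_NsKeyedArchiver(ccl_bplist.load(io.BytesIO(blob)))
    print(bundle_id, type(manifest))
conn.close()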

A third folder will be created named Reports. In this folder there will be an HTML for every app on the device that had snapshot data. As stated previously these directories are inside the SnapshotTriageReports_timestamp named directories.

SnapshotTriage.py created directories
When the script runs one can see the bplists per bundle id being processed.

Script running.

The usage for the script is as follows:

Alexiss-MBP:iOS-Snapshot-Triage-Parser-master abrignoni$ python3 SnapshotTriage.py -h
usage: SnapTriage.py [-h] data_dir_snaps data_dir_appState

iOS Snapshot KTX Traige Parser
 Parse iOS snapshot plists and matching ktx files.
positional arguments:
  data_dir_snaps     Path to the Snapshot images Directory
  data_dir_appState  Path to the applicationState.db Directory.

optional arguments:
  -h, --help         show this help message and exit


The first positional argument is the directory where the png files are located, the converted_snapshots directory. The second positional argument is the directory for the full file system extraction or a directory that has the pertinent applicationState.db file. After the script is done running it shows how many snapshots were processed.

Total apps with snapshots.
Reports

The reports directory has HTML for all the apps with snapshots.
Reports
The reports contain all the images and timestamps found in the processed bplists.
The images the HTMLs reference are copied inside the reports directory within the images folder. This lets us copy or move the reports directory and view the images anywhere.

Have images, will travel.
Sample report

The type of data these snapshots record can include text messages, maps, documents, chats, anything that an app can show the user.

iOS snapshot for the calculator app.

The report allows the user to click on the images to view them full screen in a new tab. This is good since some of the images are downscaled or of horizontal orientation.

Super important: The first timestamp under each image is the ktx file creation time. Some of the images have 2 timestamps per ktx image. We are currently trying to figure out where that 2nd time comes from. I have added it to the report in the name of completeness. In order to validate these timestamps, taken from inside the bplist, the examiner can go to the original ktx files and look at the creation times directly for each image of interest.
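A quick, hedged sketch of checking those file system times directly on macOS (st_birthtime is macOS-specific and holds the file creation time):

import os
import sys
from datetime import datetime, timezone

ktx_path = sys.argv[1]           # path to the ktx file of interest
info = os.stat(ktx_path)
created = datetime.fromtimestamp(info.st_birthtime, tz=timezone.utc)   # macOS birth time
modified = datetime.fromtimestamp(info.st_mtime, tz=timezone.utc)
print("Created: ", created.isoformat())
print("Modified:", modified.isoformat())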

Conclusion

Thanks again to Geraldine Blay for her research. These artifacts can give context to what the user of the device was doing or seeing shortly before the time of seizure/examination. Also note that not knowing of a scripted or direct way of dealing with ktx files does not mean there is nothing that can be done. With a little bit of creativity and help we can achieve our goals.


As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Saturday, September 14, 2019

Vendor binaries and data stores: io-prefetcher.db

Short version

Certain Android devices that use Qualcomm processors contain vendor-installed binaries and libraries that create a SQLite database which keeps track of the name, last use timestamp, and use count of apps on the device.

Vendor library name and location:
vendor/lib/libqti-iopd.so
Database name and location:
userdata/vendor/iop/io-prefetcher.db
The SQL query used to extract the data can be located at the following URL:
https://github.com/abrignoni/DFIR-SQL-Query-Repo/
It is of note that the app data being tracked does not encompass all app activity, only the activity gathered by the vendor binary while its service is enabled. A more complete characterization of app activity can be gathered from the UsageStats XML files if available.

Testing and analysis platform



Long version

From the beginning, Android devices were designed to allow customization by original equipment manufacturer (OEM) vendors. Such capability permits a company like Samsung to sell Android devices with a different user interface than the one that comes with the stock (direct from Google) Android operating system. This ability is also used by hardware vendors for firmware updates and diagnostic purposes.

In order to enable this functionality modern Android devices have a partition named vendor that stores these 3rd party libraries and binaries. A short and to the point explanation of this concept can be found here: https://android.stackexchange.com/questions/205200/what-happens-if-vendor-partition-is-corrupted.

An interesting example of this capability can be seen in the creation and use of the io-prefetcher.db SQLite database. Some Samsung devices that use Qualcomm hardware track device application use and frequency via a SQLite database named io-prefetcher.db. This database is located, as seen in the next image, in the userdata/vendor/iop/ directory.

Files in these directories tend to be related to hardware matters.
The table io_pkg_tbl contains the following columns:
  • pkg_name
  • pkg_last_use
  • pkg_use_count
The following image shows some sample content from the database.

io-prefetcher.db
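A minimal sketch for pulling the same rows directly with Python's sqlite3 module (column names come from the list above; timestamp decoding is left to the query in the DFIR-SQL-Query-Repo linked in the short version):

import sqlite3

conn = sqlite3.connect("io-prefetcher.db")
rows = conn.execute("""
    SELECT pkg_name, pkg_last_use, pkg_use_count
    FROM io_pkg_tbl
    ORDER BY pkg_use_count DESC;
""")
for pkg_name, pkg_last_use, pkg_use_count in rows:
    # pkg_last_use is shown raw here; validate its encoding against UsageStats
    print(f"{pkg_name}\tlast_use={pkg_last_use}\tcount={pkg_use_count}")
conn.close()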
An interesting factoid of this file is that its creation date matches a recovery event from the device data contained in the following directory and log file: recovery/last.log.10. This, plus the fact that the database did not reside in the userdata/data directory, made me think that the database had something to do with a native functionality of the device. When I saw some of the other files in the vendor folder I assumed that the native functionality had to do with Qualcomm hardware in some way. A string search of the forensic image for table names used in the database led me to the following directory and file: vendor/lib/libqti-iopd.so. The QTI nomenclature stands for Qualcomm Technologies Inc.

Notice the libqti-iopd.so file resides in the vendor partition. These .so files are binaries/libraries. For a full explanation see the Android Concepts document here: https://developer.android.com/ndk/guides/concepts.html. Note that if you look at some of these .so files in a hex editor the file signature is ELF. For details on that see here: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format.

Vendor partition and target directory

The following image shows some of the ASCII content of this file: pure SQL statements that correspond to the creation, use, and update of the io-prefetcher.db file.


In order to have a better understanding of the file I used a simple online decompiler to look at additional ASCII values. 

Notice the call to proc
I found the call to proc, among some others, to be interesting with regard to how the data is populated in the database: by interrogating the system about what processes are running at the time of the query. For details on what proc is see here: https://linux.die.net/man/5/proc

Why is this of any importance?

When I focused my attention on the pkg_last_use values I noticed that these matched entries in the Android UsageStats xml files. By way of background, these xml files keep track of app user activity. For details see here: https://abrignoni.blogspot.com/2019/02/android-usagestats-xml-parser.html. Every pkg_last_use value I checked had a corresponding MOVE_TO_FOREGROUND value in UsageStats. The timestamps were the same or a second off. 

This is the value for one of the Facebook packages in the io-prefetcher file:

Facebook pkg_last_use timestamp

This is the value at the same timestamp in the UsageStats xml file:

Facebook UsageStats timestamp

With all this being said, be aware that the pkg_last_use date might not be the last time UsageStats shows user generated app activity. Also know that the pkg_use_count values per package are lower than the ones kept in UsageStats. It is obvious then that UsageStats gives us a more detailed and complete picture of app activity. That being the case, why bother with this database?

Looking at the contents of io-prefetcher.db might be of use due to the following:
  1. Not all examined devices will have UsageStats available. 
  2. The database will keep entries for installed as well as deleted apps.
  3. Even though there is no way to validate all the entries in the pkg_use_count field, one can use these values as information to quickly determine what apps were of most interest to the user. This can refocus examination priorities or help determine if a person who denies ever using an app is telling the truth.
It is important to note that analysis like the above will have limitations that only come to light via testing. For example, simply asserting that pkg_use_count captures every time a user executed an app, or that it exclusively marks user generated activity, can lead to serious error. Recovered data can be informative even if incomplete or not totally understood, as long as we don't try to make it say more than what it actually does. I think of it as intelligence that will drive further investigative steps.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Wednesday, August 21, 2019

iOS 11 & 12 Notifications Triage Parser

Update 9/21/19: Parser now also processes iOS 11 notifications. See usage example at the github link below. 

Short Version

Introducing a Python 3 script that looks for the UserNotifications folder in iOS 12 full file system extractions and parses the iOS notifications to easily triage their content. The script detailed below is a technical application of the research done at d204n6.com by my friend Christopher Vance that he kindly shared with me before making it public. Check out his blog on the topic at:
https://blog.d204n6.com/2019/08/ios-12-delivered-notifications-and-new.html
Script download:
https://github.com/abrignoni/iOS-Notifications-Parser
Script purpose:

  1. To parse the iOS notifications plists for every app that has the functionality enabled. 
  2. Make a report of the plist contents, giving the user the ability to hide non-human-readable, repetitive, and well-known data by pressing a button on the HTML report.
  3. Report on and export any incepted bplists (full bplists found within a plist) for further analysis.
Script reason:
  1. As stated in d204n6.com there can be a wealth of data in the iOS screen notifications, including snippets of user generated content like chat messages, images received, and distinct alerts that might not be accessible in other ways.
Pre-requisites
  1. Python 3.
  2. Full file system extraction of an iOS 12 device or the UserNotifications directory. If extracting the directory itself for processing, be aware that the script depends on the UserNotifications directory (where notifications on iOS are kept) being at least one level down (or more) from the data directory provided to the script.

Long Version 

When Chris shared his latest research with me I was immediately impressed by how much relevant data is contained in iOS notifications. For further details his blog post above is required reading. In this post I will only go into how to use the script to triage these important plists, which seem to be overlooked but shouldn't be.

Script usage

After downloading the script and configuration files you should see the 4 files shown in the image.

Scripts and configuration files.

  1. ccl_bplist.py

    Used to deserialize NSKeyedArchiver bplists. Thanks to Alex Caithness, who came up with this module. It saves us a lot of headaches. I added his module to my repo for convenience. It can be downloaded directly from the source here: https://github.com/cclgroupltd/ccl-bplist
  2. iOSNotificatonsParser.py

    This is where the magic happens. It searches a specified directory for the UserNotifications directory and, when found, parses the DeliveredNotifications.plist for every app that has notification data (see the sketch after this list).
  3. NotificationsParams.txt

    It contains strings that I consider to be common, unreadable, or repetitive. The items in the list (one per line) are used to determine which fields are hidden, not eliminated, from the final report. Be aware that the final HTML report has a couple of buttons that allow you to hide or show those fields as needed. To add more strings to hide just add a new line to the text file. One string per line.
  4. script.txt

    Contains the javascript necessary to enable the hide/show functionality in the HTML report. It gets added to each report at processing time.
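As a rough sketch of the search step described in item 2 above (directory and file names follow this post's description; treat the exact layout as something to verify against your own extraction):

import os
import sys

def find_delivered_notification_plists(data_dir):
    # Yield every DeliveredNotifications.plist found under a UserNotifications directory
    for dirpath, _dirnames, filenames in os.walk(data_dir):
        if "UserNotifications" not in dirpath:
            continue
        for name in filenames:
            if name == "DeliveredNotifications.plist":
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for plist_path in find_delivered_notification_plists(sys.argv[1]):
        print(plist_path)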
Usage.

The script only has one parameter, the data directory to be parsed. See the help below.

When the script runs it tells you which notification it is parsing and whether a bplist was found within the plist. If found, it will tell you that it was exported.


See the highlighted section above that shows a bplist being exported. When done it also reports how many plists were processed, how many bplists were exported, and how long processing took.


After the script runs a report directory will be created in the same location where the script resides.


As seen, the report directories are timestamped so the script can be run multiple times and each time it will generate a new report directory. Within the directory each app has its own unique directory named after the app's bundle id.


Each app will have a report and exported bplists if any exist. For the screen time notifications in this data set one sees the following:

Each HTML report has a header and the Hide/Show buttons on the top.


Let's zoom in a little on the buttons.

As Christopher explains, some of the ASCII values might be unimportant, unreadable, overwhelming, or simply repetitive. Hide rows hides them, as explained previously, by referencing the content of the NotificationsParams.txt file.

It will go from tons of pages to something like the following:

This is the same report; it has hidden a lot of repetitive data. Important note: it is worthwhile to always look at the full report if the app is important to the case. The report is only for triage purposes and will always require validation after execution. This is even more true when talking about the contents of NS.data fields within a plist. In some cases the content is data that is not relevant or is unreadable, but in many cases it can contain a full bplist. The report deserializes this data and lets you read it. It is hard to read due to a lack of proper formatting, but at least it will let you know if further analysis is warranted. Here is how a bplist in an NS.data field would look on the report.


Yes, hard to read but still it can be read. If anything pertinent is found then go and take the exported bplist and use any viewer for further and proper analysis.


Here is an example of the exported bplist and how a third party viewer shows you the data with ease.

Future work

As stated in Christopher's blog post there are additional data sources in the iOS notifications directory. I plan on making parsers for these as well. Like everyone else on this floating rock in space, we can have too many things, but the thing we will never have enough of is time. If only the days had more hours and our bodies less need for sleep.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.