Friday, March 8, 2019

UsRT - Graphical interface for Android Usagestats and Recent Tasks XML parsers.


Introducing UsRT

Thanks to the hard work of Chris Weber (@RD4N6) we now have a way to parse the essential data contained in the Android Usage Stats and Recent Tasks XML files through a graphical interface. Like Eric Zimmerman says, it is agent proof. Chris took my scripts, based on the research done by Jessica Hyde (@B1N2H3X), and made them accessible to all. Point and click goodness.

The application can be run as an executable (UsRT.exe) via the provided installer or through the python scripts directly. The installer has all dependencies included and is the easiest and fastest way to use the parser.

For details on the original research that motivated these scripts and the interface see Jessica Hyde's research at the SANS DFIR Summit 2018. For details on the parsing scripts see my previous blog posts for Usage Stats and Recent Tasks.

Script and installer links at the end of the blog post.

Features for Usage Stats:
  • Case information fields
  • Visual listing of files as they are processed in the left bottom corner of the interface
  • Rows and columns format with the ability to hide columns and to select all rows, check rows, or uncheck rows
  • HTML reporting
  • Ability to open already processed cases through the application generated case JSON file
  • Included Read Me file that has a quick overview on usage with related screenshots. The Read Me can be accessed via the Help menu options.
Features for Recent Tasks:
  • Same features as Usage Stats with the addition of the recent images and snapshot fields. Clicking on the images will open them in your system's default image viewer. HTML reporting includes the images as well.


Repository and installer

To get the scripts go to the following repository:
https://github.com/abrignoni/UsRT
The installer is in the same repository in the release tab.
https://github.com/abrignoni/UsRT/releases 
Conclusion

As said at the beginning of the post, I am indebted to Jessica Hyde for doing the original research and to Chris Weber for putting in all the work and effort to maximize the use of the parsing scripts by making an awesome graphical interface for them.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.



Saturday, March 2, 2019

iOS Bplist Inception

Short version:

A Python 3 script that exports compound bplists from a specific field in an iOS knowledgeC database, extracts the internal bplist, and creates a triage HTML report of its contents. Two versions are provided, for iOS 11 and iOS 12, due to a slight difference in how the internal bplist is referenced within the external one that holds it.

The scripts can be found in the following location:
https://github.com/abrignoni/iOS-KnowledgeC-StructuredMetadata-Bplists
It's recommended that you load these plists into your viewer of choice to examine them directly.

Long version:

Like most DFIR things lately this one also started with Phill Moore. He reached out to the community on the following:


Since I've been on a data parsing binge lately I was happy to try and assist. As I was reading the replies to Phill's tweet I was reminded of how, of all the data structures utilized by Apple products, bplists are one of the most prevalent. So prevalent that they can be contained within SQLite databases and can themselves contain other bplists within them. Total data storage inception. At this point there was no doubt...


Thanks to kind souls like @i_am_the_gia, @ScottVance, and others who will remain anonymous, we got test data to see if we could do the following:

  1. Export the bplists intact from the SQLite DB.
  2. Extract a bplist (clean) from the bplist that holds it (dirty).
  3. Access the clean bplist and create a file that could be used in forensic tools for analysis.
  4. Generate a triage report of clean bplist data contents to easily evaluate relevance before importing to forensic tools.
There are many tools that let us view the contents of bplists, but when these are nested in such a way, getting to the internal content requires some manual work. For examiners the world over, manual work is just the universe telling you there is a need to automate and scale.

The database selected for our testing was the iOS knowledgeC database. I highly recommend everyone reads Sarah Edwards' article on it, THE article on it. By looking at the Z_DKINTENTMETADATAKEY__SERIALIZEDINTERACTION field within the ZSTRUCTUREDMETADATA table we can see how these bplists look when nested.


Notice how there are two bplist headers in the same SQLite database content. 

Export

Exporting the data was straightforward: a regular SELECT, assigning the content of the field to a variable, and writing it out to a file. For this to work the receiving file has to be opened for binary content. As seen in the next image the extracted bplists are named using the following convention:
  • D/C = Dirty or clean. There is nothing wrong or dirty about the shell bplist; it is shorthand in opposition to the internal bplist, which I called clean after extraction due to the lack of its bplist shell.
  • Z_PK = The field name in the table that contained the primary key for the row that contained the exported bplist.
  • Numeric value = Integer contained in the Z_PK field for the row that contained the exported bplist.

By establishing this filename convention the examiner can easily backtrack to the proper row from the target table if additional fields are of interest or if there is a question on the validity of the exported bplist.
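For illustration, the export step can be sketched with the sqlite3 module from the Python standard library. The table and field names match the ones discussed above; the exact filename convention of the actual script may differ slightly:

```python
import sqlite3
from pathlib import Path

def export_dirty_bplists(db_path, out_dir):
    """Write each serialized-interaction blob to its own .bplist file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT Z_PK, Z_DKINTENTMETADATAKEY__SERIALIZEDINTERACTION "
        "FROM ZSTRUCTUREDMETADATA "
        "WHERE Z_DKINTENTMETADATAKEY__SERIALIZEDINTERACTION IS NOT NULL")
    written = []
    for pk, blob in rows:
        # D = dirty (still inside its shell); Z_PK and the row's primary
        # key value let the examiner backtrack to the source row later.
        dest = out / f"D_Z_PK_{pk}.bplist"
        dest.write_bytes(blob)  # write as raw bytes, never as text
        written.append(dest)
    conn.close()
    return written
```

The binary write is the part that matters: treating the blob as text corrupts the bplist.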

Extraction

Now that we had exported the bplists we had to get to the clean one in an automated way. Thanks to @firmsky I was reminded of an article by Sarah Edwards on the use of ccl_bplist for the parsing of NSKeyedArchiver bplists in Python. These bplist objects are beyond the scope of this blog but just know that I am grateful that Alex Caithness came up with this module that saved me from experiencing a painful headache. You can find this great module here:
https://github.com/cclgroupltd/ccl-bplist
With this module in hand and some test data we figured out that:
  1. In iOS 11 one has only to deserialize the bplist at the root which gives you the clean bplist.
  2. In iOS 12 one has to deserialize the bplist at the NS.data level since the clean bplist is contained within it.
The previous was a long way of saying that in iOS 11 the following key ccl_bplist function call
CleanBplistFile = ccl_bplist.deserialise_NsKeyedArchiver(DirtyBplistFile)
would give you the clean bplist ready to write out where as the following code
ns_keyed_archiver_objg = ccl_bplist.deserialise_NsKeyedArchiver(DirtyBplistFile)
CleanBplistFile = (ns_keyed_archiver_objg["NS.data"])
would give you the clean bplist after accessing the NS.data portion. It would be good to have further confirmation that these types of incepted bplists truly vary per iOS version and that this is not only a crazy coincidence of the data sets we had available.
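The scripts themselves use ccl_bplist as shown above. As a rough stdlib-only sketch of the same inner-bplist idea, one can load the outer NSKeyedArchiver bplist with plistlib and scan its $objects list for any value that is itself a bplist. This is an approximation for illustration, not the scripts' actual logic:

```python
import plistlib

def extract_clean_bplist(dirty_bytes):
    """Return the first nested bplist found inside an NSKeyedArchiver bplist."""
    outer = plistlib.loads(dirty_bytes)
    # NSKeyedArchiver archives keep their payloads in the $objects list;
    # a nested bplist is a bytes value starting with the bplist00 magic.
    for obj in outer.get("$objects", []):
        if isinstance(obj, bytes) and obj.startswith(b"bplist00"):
            return obj  # the "clean" inner bplist
    return None
```

This covers the iOS 12 layout described above, where the clean bplist sits in an NS.data value of the outer archive.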

Originally the purpose of this exercise was to find a way to easily extract the clean bplists in order to import them into forensic tools with minimum effort and no manual extraction. It became clear that a triage report was needed when one of my data sets contained 1565 extracted bplists. Be aware that the script developed will keep both the dirty and clean bplists in separate folders within a timestamped directory. In this way one can backtrack the whole process for validation purposes.



Reporting

With a triage report that shows the content one can decide which set of bplists should be drilled down into more or just retained due to work or case relevance. The fields on the HTML formatted report are the following:
  • Filename = Same format as stated before.
  • Intent Class = This is a value taken from a field in the table where the dirty bplists were stored in the knowledgeC database. This value is key because it gives you a clue about the purpose of the contents of the bplist.
  • Intent Verb = Another value taken from one of the table fields. Further description of bplist purpose and/or type of content.
  • NSstartDate = Time stamp.
  • NSsendDate = Time stamp.
  • NSduration = Float value.
  • NSdata = Binary data store of activity.
Since the report is a triage report the NSdata values are just a string representation of the binary values in them. Although it contains many non-human-readable characters it is pretty easy to key in on the ASCII values that one can easily read. The report is a testament to my ignorance on how to convert these values to something more pleasing to the eyes, but for triage purposes, helping the examiner decide what to process further with a forensic tool, it is perfect. Some of the values can be cleaned up a little with UTF-8 decoding but many, especially those that contain a lot of data, cannot.
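That UTF-8 cleanup can be approximated as below (my own sketch, not the script's actual code): attempt a strict decode first, then fall back to replacement characters so the readable ASCII runs survive:

```python
def triage_text(blob):
    """String rendering of a binary NSdata blob for a triage report."""
    try:
        # Clean case: the blob is valid UTF-8 and decodes fully.
        return blob.decode("utf-8")
    except UnicodeDecodeError:
        # Messy case: keep the readable runs, replace undecodable bytes.
        return blob.decode("utf-8", errors="replace")
```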

The next picture is an example of the report format. The particular data in the report was shared with the condition that it would not be made public, hence the redaction.


It is up to the reader to test it out and discover for herself what awesome data resides in these structures. Things that are, things that were in one form and changed to another, and things that are no more.

Future work

I was surprised by the amount of data contained in just one field from one table in one database. I can only imagine what relevant data resides in incepted SQLite-held bplists in other tables and other databases. The next step is to evolve the script so it can extract any bplist blob from any SQLite table and generate dirty and clean instances as needed with complementing reports for triage. A key part is to better understand how the NSdata fields work and to see if anyone in the community knows how to parse them. If only the days had more hours and our bodies less need for sleep.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Tuesday, February 19, 2019

Android Recent Tasks XML Parser

This post is a continuation of my last blog post where I introduced a simple parser for the Android usagestats XML files.
https://abrignoni.blogspot.com/2019/02/android-usagestats-xml-parser.html
In this entry I am introducing a parser for the Android recent tasks XML files. Like the previous parser it is based on the research done by Jessica Hyde that she presented at the SANS DFIR Summit 2018. You can see her excellent presentation here:
YouTube: Every Step You Take: Application and Network Usage in Android 
The presentation slides, in PDF format, can be found here:
PDF Slides: Every Step You Take: Application and Network Usage in Android 
As explained in the presentation the Recent Tasks XML files record the following activities for recently used apps:

  • Task ID number = Used to correlate snapshot and recent image files.
  • Effective UID = App identifier.
  • First active time = Timestamp in millisecond epoch time.
  • Last active time = Timestamp in millisecond epoch time.
  • Last time moved = Timestamp in millisecond epoch time. 
  • Affinity = Bundle ID name.
  • Calling package = Bundle ID or process that called the referenced recent task.
  • Real activity = Gives information on app usage at time of recording and snapshot creation.
These XML files are located in the following directory:
\system_ce\0\recent_tasks
In addition to these XML files, recent tasks can produce snapshot images as well as recent images. Details about these are contained in the previously referenced presentation. These images can be found in the corresponding directories:
\system_ce\0\shortcut_service\snapshots
\system_ce\0\recent_images

In order to leverage the data contained in these XML files and images I made a parser in Python 3 that takes the XML information and puts it in a SQLite database for ease of querying. The script can be found here: 
https://github.com/abrignoni/Android-Recent-Tasks-XML-Parser
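A stdlib-only sketch of the first parsing step, reading one of these XML files and collapsing its attributes into a JSON blob like the ones the database stores. The attribute names in the sample are illustrative, not the exact recent_tasks schema:

```python
import json
import xml.etree.ElementTree as ET

def task_attributes(xml_text):
    # Collect the root element's attributes as one JSON blob, the same
    # shape the parser stores for later JSON_extract queries.
    root = ET.fromstring(xml_text)
    return json.dumps(dict(root.attrib))

# Illustrative sample, not a real recent task file:
sample = '<task task_id="42" effective_uid="10086" last_active_time="1549917600000" />'
print(task_attributes(sample))
```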
Warning:

The script has been tested and found to be accurate on my own data sets. Not all recent tasks will contain all data events or related images. Additional testing and validation of the script is humbly requested and more than welcomed.

Script usage

1. Extract from your Android source device the three directories mentioned previously. Extraction should be logical and should not contain forensic-tool-generated recovered items such as deleted files and/or file slack.

2. Place the script and the noimage.jpg files from the repository in the same root directory as the extracted directories.

Have this before running script.
3. Run the script with no arguments.

Script is done.
4. When completed the script will generate two files, a SQLite database named RecentAct.db and a report file named Recent_Activity.html.

What you should see after a successful run of the script.

Note that the RecentAct.db SQLite file will contain two fields populated with all the XML attributes in JSON format. The analyst can use JSON_extract to generate custom queries with any of the attributes within the XML.
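A minimal, self-contained illustration of such a JSON_extract query; the table and attribute names here are made up for the example, not the actual RecentAct.db schema:

```python
import json
import sqlite3

# Build a throwaway table with one JSON-filled column, the same shape
# as the parser's attribute fields.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recent_tasks (fullatt TEXT)")
conn.execute("INSERT INTO recent_tasks VALUES (?)",
             (json.dumps({"task_id": "42", "affinity": "com.android.chrome"}),))

# json_extract promotes any key inside the JSON blob to its own column.
row = conn.execute(
    "SELECT json_extract(fullatt, '$.affinity') FROM recent_tasks").fetchone()
print(row[0])  # com.android.chrome
```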

5. Open the Recent_Activity.html report.

Sample report entry.
For every recent task there will be a table with pertinent information as well as the snapshot and recent image files that correspond to it. To view the images full size just click on them. Be aware of the importance of the creation times of these image files within the source media. For details see the presentation previously mentioned.

It is of note that not all recent tasks, in some of my test data samples, had corresponding images or full sets of attributes. When a recent task lacks corresponding images the script will reference the noimage.jpg file.

Missing image and missing attributes.
For missing attributes the report will state 'NO DATA' and/or 'NO IMAGE' in the Key and Values columns as needed. Be aware that the SQLite database has all attributes in JSON format for custom query generation.

Conclusion

I want to thank again Jessica Hyde for her research and for making the community aware of these artifacts. Hopefully this script can make it easier to give much needed context to these images and apps whose value might not be found anywhere else on the source device.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Sunday, February 17, 2019

Android Usagestats XML Parser

As I've been testing and using Sarah Edwards' excellent APOLLO pattern of life framework for iOS, I was reminded of the great work done by Jessica Hyde on a similar set of files for Android called usagestats. These files provide insight into what apps were being used, whether they were in the foreground or background, and how long the apps had been active, among many other forensically interesting data points.

In her presentation for the SANS DFIR Summit 2018, Jessica Hyde explains how the usagestats XML files record the following activity from Android devices:
  • User interaction
  • Move to foreground
  • Move to background
  • Configuration changes
I highly recommend the reader check out her Every Step You Take DFIR Summit video and presentation slides in PDF format. The rest of the blog post will make more sense after viewing and/or reading her work.

In order to leverage the data contained in these XML files I made a parser in Python 3 that takes the XML information and puts it in a SQLite database for ease of querying. The script can be found here: 
https://github.com/abrignoni/Android-Usagestats-XML-Parser
Warning:
The script has been tested and found to be accurate on my own data sets. Additional testing and validation of the script is humbly requested and more than welcomed.

Script usage

Extract from your source Android device the following directory:
\data\system\usagestats\
Export the usagestats directory
Place the script in the same root directory as the just-extracted usagestats directory.

Side by side as such.
Run the script with no arguments from the root directory that contains both the script and the usagestats directory. The script will parse all the internal directories and files for you.

No arguments are needed.
The script will also alert you to files whose content is not parseable XML. After the script ends a SQLite database named usagestats.db is generated.

Data is served.
Use your favorite SQLite application to view the parsed data. Notice the timestamps are in epoch time.

Notice the fields.
The generated database contains a table named data with the following fields:

Usage_type: 

Each XML file contains a description of what type of data it is recording. The values can be event-log, configuration, or packages.

Lastime:

Records when an app (package) was last active or when a configuration took place. The XMLs themselves keep track of these times in two ways. Most usage events maintain a count of how many milliseconds passed between the creation of the XML file and the occurrence of the event. To calculate the timestamp of the event the script takes the XML filename, which is the epoch time of the file itself in milliseconds, and adds to it the milliseconds it took for the event to occur. This provides the event time as an epoch timestamp. Events that are not millisecond offsets from the epoch-time filename of the XML file keep their time as an epoch timestamp preceded by a minus sign. The script eliminates the minus sign to keep the epoch timestamp. My testing has shown this way of calculating times to be consistent with activity I performed on the device.
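The calculation just described can be sketched as follows; the filename and offset values are illustrative:

```python
from datetime import datetime, timezone

def event_epoch_ms(xml_filename_stem, time_value):
    # The XML filename is the file's creation time in epoch milliseconds;
    # most event times are millisecond offsets from it. Negative values
    # are already-absolute epoch timestamps with a leading minus sign,
    # so stripping the sign yields the timestamp directly.
    base = int(xml_filename_stem)
    value = int(time_value)
    return -value if value < 0 else base + value

# An event recorded 5 seconds after the file's creation:
ts = event_epoch_ms("1549917600000", "5000")
utc = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)
print(utc)
```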

The following image was included in an app review I did in November for the TikTok Android application. Notice the time some of the chat activity took place.

Notice the created time
As seen in the image above the TikTok app is sending and receiving chat messages. The parsed XML SQLite database shows the same activity at the same time being totally consistent with TikTok chatting.

Notice the classs values
Shortly I will provide a SQL query that will format the dates and type values in the same human readable format seen above.

Time_active:
Certain events keep track of their length in milliseconds.

Package:
Application name.

Types:
Activity types as integer values. These represent activity like move to background or move to foreground. The list of interactions can be found here:
https://developer.android.com/reference/android/app/usage/UsageEvents.Event
Classs:
Application name and corresponding modules in use.

Source:
Usagestat originating XML category. They are daily, weekly, monthly, and yearly.

Fullatt:
Contains the full attributes for the XML event, in other words all the data for the event in JSON format. With the data in this field the analyst can easily select any key:value pair and make it its own column in a SQL query by using JSON_Extract. For an example of how this SQL query function works see here:
https://abrignoni.blogspot.com/2018/09/finding-slack-messages-in-android-and.html
SQL query

The following query can be run against the script-generated database to format the timestamps from UTC to local time, add a field for time_active in seconds, and change the types integer values to readable activity descriptions. Be aware that I have not added all case types per the link provided previously in the Types section. Add as needed.

SELECT
  usage_type,
  datetime(lastime/1000, 'UNIXEPOCH', 'localtime') as lasttimeactive,
  timeactive as time_Active_in_msecs,
  timeactive/1000 as timeactive_in_secs,
  package,
CASE types
     WHEN '1' THEN 'MOVE_TO_FOREGROUND'
     WHEN '2' THEN 'MOVE_TO_BACKGROUND'
     WHEN '5' THEN 'CONFIGURATION_CHANGE'
     WHEN '7' THEN 'USER_INTERACTION'
     WHEN '8' THEN 'SHORTCUT_INVOCATION'
     ELSE types
END types,
  classs,
  source,
  fullatt
FROM data
ORDER BY lasttimeactive DESC

Conclusion

My hope with this script is to make accessible the data contained in the usagestats XML files for digital forensic case work. Additional testing of the script and suggestions on how to optimize it are welcomed. I hope to create additional scripts that will parse the Android battery status and recent tasks XML files as shown by Jessica Hyde in her presentation.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.



Wednesday, January 16, 2019

QuickPic for Android - Don't forget external/emulated storage!

QuickPic Gallery for Android

QuickPic is an image gallery app for Android devices that used to be fairly popular before it was taken down from the Google Play Store. Since the app is still available via third-party APK repositories we might still come across it in our case work.

Logo

The main reason for this blog post on the QuickPic app is that it illustrates the value of checking related external, emulated, and adoptable storage in Android devices. In this particular app all the databases that track user generated activity were kept outside of the main app directory.

QuickPic app keeps relevant databases in the following directory:

/storage/emulated/0/Android/data/com.alensw.PicFolder/cache

SQLite Queries

Queries that can be used to extract pertinent data can be found at:

https://github.com/abrignoni/DFIR-SQL-Query-Repo/tree/master/Android/QUICKPIC

The following queries are provided:
Quick analysis of SQLite databases

Thumbnails
The app provides thumbnails for all the images it scans as seen in the next image.
Thumbnails
These thumbnails are kept in a database titled thumbs_numeric-value.db, where numeric-value represents a series of numbers, as seen below.

Notice the multiple thumbnail databases
In the image we can see that the databases of interest are named thumb_123904.db and thumb_220900.db. It appears that when one database reaches a particular point (size? directory paths? number of records? year?) it continues registering thumbnail data in a second database.

The thumbnail databases contain a table named thumbs with the following columns: path, thumb, and modified.

Thumbnail database content
The columns provide the path where the original image is located on the device, a modified timestamp, and a thumbnail of the original image. To view an image, export the blob within the thumb column and rename it with the proper extension as seen in the path column. The usefulness of having a copy of the data in thumbnail form is apparent. This is even more so when the examiner has access to multiple data backups of the same device, be it by Android itself or by third-party backup apps.
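That export-and-rename step can be sketched as follows, assuming the thumbs table layout described above:

```python
import sqlite3
from pathlib import Path

def export_thumbs(db_path, out_dir):
    """Dump every thumbnail blob, reusing the original image's extension."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    exported = []
    for i, (path, blob) in enumerate(
            conn.execute("SELECT path, thumb FROM thumbs")):
        # Reuse the source image's extension so the thumbnail opens in a
        # normal viewer; fall back to .jpg when the path has none.
        ext = Path(path).suffix or ".jpg"
        dest = out / f"thumb_{i}{ext}"
        dest.write_bytes(blob)
        exported.append(dest)
    conn.close()
    return exported
```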

Previews
When the user presses one of the thumbnail images in the application to view a larger version of the image, the app keeps track of that activity in the preview database. The database can be seen in the cache directory image above. The preview database contains a table named cache with the following columns: document_id, _data, _size, accessed_time, and last_modified.

Preview database
As seen in the previous image, the document_id column keeps the location path for the original image. The _data column keeps the location path for a copy of the original image that was selected by the user. These images are kept, as seen in the path, in the .preview directory. The preview images themselves have alphanumeric names with no extensions. Just like the images in the thumbnails database, one can add the proper extension to the preview files to view them. The rest of the fields are for preview image size, accessed time, and last modified time.

It is of note that files in the .preview directory tell us that the user had to select/press the thumbnail of a media item of interest in order to view a large version of it. It requires user interaction. 

Conclusions

This quick analysis made me think of how many times we can be in a hurry and overlook related app directories that are contained in SD cards or emulated storage space. Databases that tell us about user intent, what was selected for viewing, and what was actually seen can be missed if we don't make it a habit to always look for unfamiliar app ids in app directories AND emulated/external storage. A one minute check can provide us with amazing returns we did not expect. 

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.



Sunday, January 6, 2019

iOS Mobile Installation Logs Parser

In the last two blog posts I wrote about ways of obtaining a list of currently installed apps and their corresponding app directories from an iOS file system extraction. My usual method is to query the contents of the applicationState.db file to find the app bundle id and which GUID-like directory name corresponds to it. By finding the proper directory one can focus on the data stores it contains for parsing of user-generated data when our forensic tools are not aware of them.

On my second post I received great feedback from Sarah Edwards. She pointed me to the contents of the mobile installation logs in iOS.

Cool stuff!
I immediately wondered if there was a script that could parse those logs for the data I was looking for. After asking Sarah Edwards and looking online I didn't find any.

I'll do it.
The link below is for a python script I made that parses the mobile installation logs.

https://github.com/abrignoni/iOS-Mobile-Installation-Logs-Parser

These logs contain a lot of information. Currently the script only extracts the following events:
  • App install successful with date and time.
  • App container made live with date, time, and path.
  • App container moved with date, time, and path.
  • App destroying container with date, time, and path.
Here is a sample screen of how the logs look as taken from the device. The image has been zoomed out which could make it harder to read unless the preview image is clicked.

Lots of data.
The script is really simple to use.
  1. Have Python 3.6.4 or newer installed.
  2. Extract the logs from the /private/var/installd/Library/Logs/MobileInstallation/ directory.
  3. Place the script in the same directory as the extracted logs.
  4. Execute the script via CMD.
The screen, after done, should look something like this:

Run complete. Some stats.

The script will produce one SQLite database called mib.db, a directory named Apps_State, and a directory named Apps_Historical.

Script with generated items.

The SQLite database holds the extracted information from the lines of log data. The script queries the database to produce the contents within the two directories.
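The core of such a parser is pulling fields out of each log line and inserting them into SQLite. The sketch below illustrates only the field-extraction part against a hypothetical line shape; the real MobileInstallation log format differs, so treat the pattern as an assumption for illustration:

```python
import re

# Hypothetical line shape: "<date> <time> <bundle-id>: <event text>".
LINE_RE = re.compile(
    r"^(?P<ts>\S+ \S+) (?P<bundle>[A-Za-z0-9.\-]+): (?P<event>.+)$")

def parse_line(line):
    """Return {ts, bundle, event} for a matching line, else None."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None

print(parse_line("2019-01-06 10:22:33 com.blizzard.social: Install Successful"))
```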

The Apps_State directory can have two files within it. These are named InstalledApps.txt and UninstalledApps.txt. The contents reflect the name of the text files. Here is a sample image of InstalledApps.txt content:

List of installed apps.
Having this list handy is really useful since it can be used to identify currently installed apps within the file system image that might have been missed by our third-party forensic tool of choice.

If one would like more context regarding when the app was installed and where the app directory is located, the Apps_Historical directory has all that information per app.

A txt file for every app.
Here is a sample of the historical information regarding an installed app.

Historical events for com.blizzard.social
Notice the report has a timestamp for every event. The script puts the most recent events at the top so the current path for the application directory will be at or near the top.

Here is a sample of historical information regarding an uninstalled app.

Historical event for org.videolan.vlc-ios
Like the previous report there is a timestamp for every entry and events start with the most recent at the top. These reports are useful if one wants to determine when an app was uninstalled or if a current app was uninstalled and then reinstalled multiple times.

Historical events for org.coolstar.electra1131
Notice the multiple 'Destroying', 'Made', and 'Install Successful' entries in the report. Again the most recent ones are at the top.

As seen above the script's output answers the original request: installed apps with their corresponding app directory paths. It goes further by identifying uninstalled apps and by providing timestamps and historical app event aggregation.

By looking at the logs there seem to be further areas where the script can be improved. This is my to-do list:
  • Report on 'detected reboot' and other log entries that indicate system state as opposed to particular app states.
  • Add 'updated bundle entries' for 'container made live' context in the historical reports.
  • Add 'attempting delta patch update' and app version information in the historical reports.
  • Add 'uninstall requested' and 'uninstalling identifier' in the historical reports.
I can't thank enough my colleague @i_am_the_gia for testing out the script on her data sets and Sarah Edwards for making me aware of the logs.

If anyone gives the logs a look and finds further items to report on or wants to give other feedback I can, as always, be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.