Saturday, September 14, 2019

Vendor binaries and data stores: io-prefetcher.db

Short version

Certain Android devices that use Qualcomm processors contain vendor-installed binaries and libraries that create a SQLite database which keeps track of the name, last-use timestamp, and use count of apps on the device.

Vendor library name and location:
vendor/lib/libqti-iopd.so
Database name and location:
userdata/vendor/iop/io-prefetcher.db
The SQL query used to extract the data can be located at the following URL:
https://github.com/abrignoni/DFIR-SQL-Query-Repo/
It is of note that the app data being tracked does not encompass all app activity, only that gathered by the vendor binary while its service is enabled. A more complete characterization of app activity can be obtained from the UsageStats XML files, if available.

Testing and analysis platform



Long version

Since the beginning, Android devices were designed to allow customization by original equipment manufacturer (OEM) vendors. Such capability permits a company like Samsung to sell Android devices with a different user interface than the one that comes with the stock (direct from Google) Android operating system. This ability is also used by hardware vendors for firmware updates and diagnostic purposes.

In order to enable this functionality modern Android devices have a partition named vendor that stores these 3rd party libraries and binaries. A short and to the point explanation of this concept can be found here: https://android.stackexchange.com/questions/205200/what-happens-if-vendor-partition-is-corrupted.

An interesting example of this capability can be seen in the creation and use of the io-prefetcher.db SQLite database. Some Samsung devices that use Qualcomm hardware track device application use and frequency via a SQLite database named io-prefetcher.db. This database is located, as seen in the next image, in the userdata/vendor/iop/ directory.

Files in these directories tend to be related to hardware matters.
The table io_pkg_tbl contains the following columns:
  • pkg_name
  • pkg_last_use
  • pkg_use_count
The following image shows some sample content from the database.

io-prefetcher.db
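For reference, here is a minimal query sketch along the lines of the one in the repository, wrapped in Python. It assumes pkg_last_use is stored as a Unix epoch value in milliseconds; verify the format against your own test device before relying on it.

import sqlite3

QUERY = """
SELECT pkg_name,
       datetime(pkg_last_use / 1000, 'unixepoch') AS last_use_utc,  -- assumption: epoch milliseconds
       pkg_use_count
FROM io_pkg_tbl
ORDER BY pkg_last_use DESC;
"""

with sqlite3.connect('io-prefetcher.db') as db:
    for pkg_name, last_use_utc, use_count in db.execute(QUERY):
        print(last_use_utc, pkg_name, use_count)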
An interesting detail about this file is that its creation date matches a recovery event recorded in the following directory and log file: recovery/last.log.10. This, plus the fact that the database did not reside in the userdata/data directory, made me think the database had something to do with native functionality of the device. When I saw some of the other files in the vendor folder I assumed that the native functionality had to do with Qualcomm hardware in some way. A string search of the forensic image for the table names used in the database led me to the following directory and file: vendor/lib/libqti-iopd.so. The QTI nomenclature stands for Qualcomm Technologies Inc.

Notice the libqti-iopd.so file resides in the vendor partition. These .so files are binaries/libraries. For a full explanation see the Android Concepts document here: https://developer.android.com/ndk/guides/concepts.html. Note that if you look at some of these .so files in a hex editor the file signature is ELF. For details on that see here: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format.

Vendor partition and target directory

The following image shows some of the ASCII content of this file: pure SQL statements that correspond to the creation, use, and update of the io-prefetcher.db file.


In order to have a better understanding of the file I used a simple online decompiler to look at additional ASCII values. 

Notice the call to proc
I found the call to proc, among some others, interesting because it suggests the database is populated by interrogating the system about what processes are running at the time of the query. For details on what proc is see here: https://linux.die.net/man/5/proc

Why is this of any importance?

When I focused my attention on the pkg_last_use values I noticed that these matched entries in the Android UsageStats XML files. By way of background, these XML files keep track of app user activity. For details see here: https://abrignoni.blogspot.com/2019/02/android-usagestats-xml-parser.html. Every pkg_last_use value I checked had a corresponding MOVE_TO_FOREGROUND value in UsageStats. The timestamps were the same or a second off. 

This is the value for one of the Facebook packages in the io-prefetcher file:

Facebook pkg_last_use timestamp

This is the value at the same timestamp in the UsageStats xml file:

Facebook UsageStats timestamp

With all this being said, be aware that the pkg_last_use date might not match the last user-generated app activity recorded in UsageStats. Also know that the pkg_use_count values per package are lower than the ones kept in UsageStats. It is obvious, then, that UsageStats gives us a more detailed and complete picture of app activity. That being the case, why bother with this database?

Looking at the contents of io-prefetcher.db might be of use due to the following:
  1. Not all examined devices will have UsageStats available. 
  2. The database will keep entries for installed as well as deleted apps.
  3. Even though there is no way to validate all the entries in the pkg_use_count field, one can use these values to quickly determine which apps were of most interest to the user. This can refocus examination priorities or help determine whether a person who denies ever using an app is telling the truth.
It is important to note that analysis like the previous one has limitations that will only come to light via testing. For example, simply asserting that pkg_use_count captures every time a user executed an app, or that it exclusively marks user-generated activity, can lead to serious error. Recovered data can be informative even if incomplete or not totally understood, as long as we don't try to make it say more than what it actually does. I think of it as intelligence that will drive further investigative steps.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Wednesday, August 21, 2019

iOS 12 Notifications Triage Parser

Update 9/21/19: Parser now also processes iOS 11 notifications. See usage example at the GitHub link below. 

Short Version

Introducing a Python 3 script that looks for the UserNotifications folder in iOS 12 full file system extractions and parses the iOS notifications to easily triage their content. The script detailed below is a technical application of the research done at d204n6.com by my friend Christopher Vance that he kindly shared with me before making it public. Check out his blog on the topic at:
https://blog.d204n6.com/2019/08/ios-12-delivered-notifications-and-new.html
Script download:
https://github.com/abrignoni/iOS-Notifications-Parser
Script purpose:

  1. To parse the iOS notifications plists for every app that has the functionality enabled. 
  2. Make a report of the plist contents, giving the user the ability to hide non-human-readable, repetitive, and well-known data by pressing a button on the HTML report.
  3. Report on and export any incepted bplists (full bplists found within a plist) for further analysis.
Script reason:
  1. As stated in d204n6.com there can be a wealth of data in the iOS screen notifications to include snippets of user generated content like chat messages, images received, and distinct alerts that might not be accessible in other ways.
Pre-requisites
  1. Python 3.
  2. Full file system extraction of an iOS 12 device or the UserNotifications directory. If extracting the directory itself for processing, be aware that the script expects the UserNotifications directory (where notifications on iOS are kept) to be at least one level down (or more) from the data directory provided to the script.
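A minimal sketch of that directory search (not the script itself) could look like the following, assuming the per-app notification data lives in plist files under a UserNotifications directory as described above:

import os
import sys

def find_notification_plists(data_dir):
    # Walk the extraction and keep any plist that sits under a UserNotifications directory.
    hits = []
    for root, _, files in os.walk(data_dir):
        if 'UserNotifications' in root.split(os.sep):
            hits += [os.path.join(root, f) for f in files if f.endswith('.plist')]
    return hits

if __name__ == '__main__':
    for plist_path in find_notification_plists(sys.argv[1]):
        print(plist_path)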

Long Version 

When Chris shared his latest research with me I was immediately impressed by how much relevant data is contained in iOS notifications. For further details his blog post above is required reading. In this post I will only go into how to use the script to triage these important plists, which seem to be overlooked but shouldn't be.

Script usage

After downloading the script and configuration files you should see the 4 files shown in the image.

Scripts and configuration files.

  1. ccl_bplist.py

    Used to deserialize NSKeyedArchiver bplists. Thanks to Alex Caithness, who came up with this module; it saves us a lot of headaches. I added his module to my repo for convenience. It can be downloaded directly from the source here: https://github.com/cclgroupltd/ccl-bplist. A minimal usage sketch appears after this list.
  2. iOSNotificationsParser.py

    This is where the magic happens. It searches the specified directory for the UserNotifications directory and, when found, parses the DeliveredNotifications.plist for every app that has notification data.
  3. NotificationsParams.txt

    It contains strings that I consider to be common, unreadable, or repetitive. The items in the list (one per line) are used to determine whether matching fields are to be hidden, not eliminated, from the final report. Be aware that the final HTML report has a couple of buttons that allow you to hide or show those fields as needed. To add more strings to hide just add a new line to the text file, one string per line.
  4. script.txt

    Contains the javascript necessary to enable the hide/show functionality in the HTML report. It gets added to each report at processing time.
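As mentioned in item 1, ccl_bplist does the NSKeyedArchiver heavy lifting. The following is a minimal usage sketch of the module; the file name is illustrative and this is typical usage, not necessarily how the parser invokes it:

import ccl_bplist

# Load the raw bplist, then rebuild the NSKeyedArchiver object graph
# so the flattened $objects table reads as native Python structures.
with open('exported.bplist', 'rb') as f:   # hypothetical exported bplist
    plist = ccl_bplist.load(f)

deserialized = ccl_bplist.deserialise_NsKeyedArchiver(plist)
print(deserialized)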
Usage.

The script only has one parameter, the data directory to be parsed. See the help below.
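A typical invocation would look something like this (the extraction path is illustrative):

python iOSNotificationsParser.py D:\extractions\ios12_ffs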

When the script runs it tells you which notification it is parsing and whether a bplist was found within the plist. If found, it will tell you that it was exported.


See the highlighted section above that shows an exported bplist. When done, the script also reports how many plists were processed, how many bplists were exported, and how long processing took.


After the script runs a report directory will be created in the same location where the script resides.


As seen, the report directories are timestamped so the script can be run multiple times, each time generating a new report directory. Within the directory each app has its own unique directory named after the app's bundle ID.


Each app will have a report and exported bplists if any exist. For the Screen Time notifications in this data set one sees the following:

Each HTML report has a header and the Hide/Show buttons on the top.


Let's zoom in a little on the buttons.

As Christopher explains, some of the ASCII values might be unimportant, unreadable, excessive, or simply repetitive. The Hide Rows button hides them, as explained previously, by referencing the content of the NotificationsParams.txt file.

It will go from tons of pages to something like the following:

This is the same report with a lot of repetitive data hidden. Important note: it is worthwhile to always look at the full report if the app is important to the case. The report is only for triage purposes and will always require validation after execution. This is even more true when talking about the contents of NS.data fields within a plist. In some cases they contain data that is not relevant or is unreadable; in many cases they contain a full bplist. The report deserializes this data and lets you read it. It is hard to read due to a lack of proper formatting, but at least it will let you know if further analysis is warranted. Here is how a bplist in an NS.data field would look on the report.


Yes, hard to read, but it can still be read. If anything pertinent is found, take the exported bplist and use any viewer for further, proper analysis.


Here is an example of the exported bplist and how a third party viewer shows you the data with ease.

Future work

As stated in Christopher's blog post there are additional data sources in the iOS notifications directory. I plan on making parsers for these as well. Like everyone else on this floating rock in space, we can have too many things, but the one thing we will never have enough of is time. If only the days had more hours and our bodies less need for sleep.

As always I can be reached on twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.


Saturday, August 10, 2019

ARTEMIS - Android support for APOLLO

Short Version

Introducing a Python 3 script, with corresponding modules, that extends Sarah Edwards' APOLLO framework to support Android devices.

ARTEMIS

Name:
ARTEMIS (Android Review Timeline Events Modular Integrated Solution)
Purpose:
  1. Python 3 script that parses Android UsageStats XML files for automatic ingestion by APOLLO. 
  2. Add pattern of life analysis in Android devices to APOLLO.
  3. Continue to evolve ARTEMIS so it can parse additional non APOLLO supported data sources for APOLLO ingestion and analysis.
New APOLLO modules for Android devices:
  1.  UsageStats 
  2.  ContextLog 
  3. SamsungMembersDeviceEvents 
  4. SamsungSmartManagerUsageLog 
Update:
Converted the original Python 2 APOLLO script to Python 3. Works on Windows 10.
Pre-requisites:
 A full file system extraction of the Android device will produce the best results.
Location:
ARTEMIS, the updated APOLLO, and the Android APOLLO modules can be found in the following fork/branch: https://github.com/abrignoni/APOLLO/tree/py3-branch. The original APOLLO framework can be found here: https://github.com/mac4n6/APOLLO
Long Version 

When Sarah Edwards released APOLLO in November last year I was highly impressed by the importance of aggregating specific pattern of life data (PLD) from iOS devices in a timeline format. Since then I have been looking for PLD in Android devices that can provide the same type of insight that APOLLO aggregates for iOS devices. This search motivated me to create the DFIR SQL Query Repo, a UsageStats XML parser, and multiple Magnet Forensics Artifact Exchange custom artifacts. Yesterday, as I was driving home from work, I thought about how to aggregate all those PLD artifacts in an APOLLO-like format. Then it hit me (not a car, but an idea): feed non-SQLite Android files to APOLLO for parsing and analysis. Hence ARTEMIS was born.

Why

If you are not familiar with APOLLO you will be well served to become familiar with it. It can be found here. The Python 2 script has been instrumental in many DFIR cases. The only issue with APOLLO was that it was coded in Python 2, which reaches end of life at the end of 2019, so in order to extend the functionality I had to port it to Python 3. For reasons unknown to me I couldn't make APOLLO work in a Windows environment even with Python 2 installed. I depended on my trusty Apple computer, but that required moving data to and from my main DFIR boxes. By porting APOLLO to Python 3 my Windows machines can play nicely with it. Also, since I code in Python 3, I could integrate it with my nascent ARTEMIS idea.

How

First order of business was to convert APOLLO to Python 3. Thanks to the Python Modernize project it was trivial to do so. With APOLLO properly converted the next step was to find a way to prepare the largest PLD source in Android for APOLLO ingestion.

One of the main data sources APOLLO leverages for PLD in iOS is the SQLite knowledgeC database. There is no analogous SQLite database in Android. Thanks to Jessica Hyde's research I was made aware that similar PLD information is contained in the UsageStats XML. In order to have APOLLO ingest UsageStats data it had to be converted from XML to SQLite. ARTEMIS' purpose is to do this conversion.
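A rough sketch of that XML-to-SQLite idea follows. This is not ARTEMIS itself, and the attribute names checked below (package, time, lastTimeActive) are illustrative assumptions; UsageStats schemas vary by Android version.

import sqlite3
import xml.etree.ElementTree as ET

def usagestats_xml_to_sqlite(xml_path, db_path):
    # Flatten every XML element that carries a package attribute into a row
    # that an APOLLO-style SQL module could later query.
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE IF NOT EXISTS usagestats '
                 '(event_type TEXT, package TEXT, timestamp INTEGER)')
    for elem in ET.parse(xml_path).getroot().iter():
        pkg = elem.get('package')
        ts = elem.get('time') or elem.get('lastTimeActive')  # assumed attribute names
        if pkg and ts:
            conn.execute('INSERT INTO usagestats VALUES (?, ?, ?)',
                         (elem.tag, pkg, int(ts)))
    conn.commit()
    conn.close()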

The last step was to create a module so APOLLO could parse this new SQLite database. Also, in order to avoid having to call two scripts and move things around manually, ARTEMIS uses most of the same arguments APOLLO does, so it can hand off processing to APOLLO without user intervention, assuming both scripts are located in the same directory.

Release

The following images will provide a visual on the results the user can expect from the APOLLO/ARTEMIS workflow.

  1.  Have APOLLO and ARTEMIS in the same directory
  2. ARTEMIS will use the YOLO option in APOLLO as default since APOLLO lacks any Android specific arguments. 
  3. Place your aggregated Android data sources in a directory. ARTEMIS, like APOLLO, will not parse forensic images; it will search logical files only.
  4. Place APOLLO modules that support your Android specific SQLite databases. At a minimum include the UsageStats module.
  5. Run artemis.py. When it is done it will call apollo.py to finish processing.
The following image shows both scripts in the same directory. Your Android logical files will be located in the Data directory. The Android APOLLO modules are in the Artemis_Modules directory. The modules in the Artemis_Modules directory are a product of the research presented at the SANS DFIR Summit 2019. For details on the data these modules parse see here.


Both Python 3 scripts in the same directory 
ARTEMIS uses almost all of the same arguments as APOLLO.

Arguments

Execute the following command:
python artemis.py -o sql Artemis_Modules Data
UsageStats being processed and turned into a SQLite DB
 When ARTEMIS is done with the UsageStats it will call APOLLO to parse the data stores.

Yolo option selected by ARTEMIS
As seen at the end of the previous image the timeline will be found in the apollo.db file. I used the following SQL statement:

SELECT
datetime(Key/1000, 'unixepoch', 'localtime') as time,
Activity,
Output,
Database,
Module
FROM APOLLO order by time asc

End product
It is awesome to see how multiple PLD sources come together to paint a richer picture of what activity transpired on a device at a particular time.

Future development

A big part of ARTEMIS will be to add support for additional Android PLD sources that are not in SQLite form. In the immediate future I will work on having ARTEMIS automate the conversion of iOS mobile installation log data into a SQLite format for APOLLO ingestion.

Obligatory WARNING!!!!!

The output of these scripts is for lead purposes only. Verification by you is not optional. Be aware that there is always danger involved with timestamps from multiple disparate sources. Always verify the provenance of the data and how the timestamps relate to each other.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Friday, July 26, 2019

Android - Samsung Traces of Deleted Apps

Short version

The following Android application artifacts were presented as part of the Traces of Deleted Apps presentation by Christopher Vance and Alexis Brignoni at the SANS DFIR Summit 2019 in Austin, Texas on July 26, 2019. Presentation slides will be available at sans.org/summit-archives.
  • Samsung Members - Keeps a list of an app's display name, package name, whether it is a system app, and last used time in the following location, database, and -> table:
data/com.samsung.oh/databases/com_pocketgeek_sdk_app_inventory.db -> android_app 
The app also keeps information on the following events: type (network, install, power, alerts), type values, and creation time. These are kept in the following location, database, and -> table:
data/com.samsung.oh/databases/com_pocketgeek_sdk.db -> device_events
  • Samsung Smart Manager - Keeps a list of apps that have crashed during use. These are kept in the following location, database, and -> table:
data/com.samsung.android.sm/databases/sm.db -> crash_info, excluded_app 
The app also keeps a list of app usage times to include package name, class name, start time and end time. These are kept in the following location, database, and -> table:
data/com.samsung.android.sm/databases/lowpowercontext-system-db -> usage_log
  • Samsung Context Log - Keeps a list of app usage to include timestamp, time offset, app id, app sub id, start and stop time, and duration in milliseconds between start and stop times. These are kept in the following location, database, and -> table:
data/com.samsung.android.providers.context/databases/ContextLog.db -> use_app
These artifacts keep the previously described data even after an app is deleted from the device.
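To triage these stores without assuming anything about their schemas, a quick sketch like the following can list each table's columns and dump its rows. The database file names and table names are as documented above; adjust the paths to match your extraction layout.

import sqlite3

TARGETS = [
    ('com_pocketgeek_sdk_app_inventory.db', 'android_app'),
    ('com_pocketgeek_sdk.db', 'device_events'),
    ('sm.db', 'crash_info'),
    ('sm.db', 'excluded_app'),
    ('lowpowercontext-system-db', 'usage_log'),
    ('ContextLog.db', 'use_app'),
]

for db_file, table in TARGETS:
    with sqlite3.connect(db_file) as db:
        # PRAGMA table_info returns one row per column; index 1 is the column name.
        columns = [row[1] for row in db.execute('PRAGMA table_info({})'.format(table))]
        print(db_file, table, columns)
        for row in db.execute('SELECT * FROM {}'.format(table)):
            print(row)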

Slight addition to the short version

The artifacts described previously are tied to factory installed apps on Android Samsung devices. These can be used for pattern of life analysis, app usage timelining, as well as indicators of app presence on a device after the app has been deleted.

The SQL queries used to extract the data can be located at the following URL:
https://github.com/abrignoni/DFIR-SQL-Query-Repo
 Within the DFIR SQL Query Repo go to the following locations:
  • Samsung Members
https://github.com/abrignoni/DFIR-SQL-Query-Repo/tree/master/Android/SAMSUNG-SAMSUNG_MEMBERS
These will be available in Magnet Forensics Custom Artifact Exchange after final approval.

Longer addition to the slight addition to the short version

The following screenshots are examples of the type of data contained in these artifacts:

  • Samsung Members

com_pocketgeek_sdk_app_inventory.db -> android_app

App Inventory
com_pocketgeek_sdk.db -> device_events

Events. Notice the Package Installed Event.

  • Samsung Smart Manager
sm.db -> crash_info

Crash apps and time of crash
lowpowercontext-system-db -> usage_log

  • Samsung Context Log
ContextLog.db -> use_app


Notice how the data types being stored are almost the same as the ones kept by UsageStats.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Wednesday, June 12, 2019

Android - Samsung My Files App

Short version

Samsung mobile devices keep a list of stored media files in the following location and database:
data/data/com.sec.android.app.myfiles/databases/FileCache.db
These same devices also keep track of recent accessed media in the following location and database:
data/data/com.sec.android.app.myfiles/databases/myfiles.db 
The following queries at https://github.com/abrignoni/DFIR-SQL-Query-Repo/ can be used as templates to extract data from the aforementioned databases.

  • FileCache.db
    • Table: Filecache
    • Fields: storage, path, size, date, latest_date
  • myfiles.db
    • Table: recent_files
    • Fields: name, size, date, _data, ext, _source, description, recent_date
Long version

Samsung devices come preinstalled with the Samsung My Files app. The app can also be used on other branded devices by downloading and installing the app via the Google Play store.

Samsung My Files app
The app description tells us the main software features.
[Key features]
- Browse and manage files stored on your smartphone, SD card, or USB drive conveniently. Users can create folders; move, copy, share, compress, and decompress files; and view file details.
- Try our user-friendly features. The Recent Files list: files the user has downloaded, run, and/or opened. The Categories list: types of files, including downloaded, document, image, audio, video, and installation files (.APK).
Stored files analysis

The My Files app directory data resides in the data/data/com.sec.android.app.myfiles directory as seen in the next image.

App directory contents
Within this directory the SQLite FileCache.db file can be found. In the FileCache table one can find information on stored media, including path, size in bytes, date timestamp, and latest timestamp.


A simple query can be produced to extract this data. One can be found here.
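As a sketch of what such a query can look like, the following assumes the date and latest_date columns hold Unix epoch values in milliseconds; verify the format against a test device.

import sqlite3

QUERY = """
SELECT storage, path, size,
       datetime(date / 1000, 'unixepoch')        AS date_utc,         -- assumption: epoch milliseconds
       datetime(latest_date / 1000, 'unixepoch') AS latest_date_utc
FROM Filecache
ORDER BY latest_date DESC;
"""

with sqlite3.connect('FileCache.db') as db:
    for row in db.execute(QUERY):
        print(row)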

Recent files list analysis

Within the same database directory one can also find the SQLite myfiles.db file. The recent_files table keeps information on recently accessed files, as explained in the app description from the Google Play store. This table tracks file name, size in bytes, date, path, extension, source, description, and recent date.


A simple query can be produced to extract this data. One can be found here.

Why does this matter?

A list of files as recorded by the app can give us clues about what files once existed on the device if those files were deleted before the use of the My Files app. The utility of the recent files list is even more apparent, since we can correlate particular real-world events with the last use of pertinent media on the device. User-generated artifacts should be of interest to the analyst, even more so when they intersect with other parts of the case we are working. Only by knowing that such artifacts exist can we make use of them.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.

Tuesday, June 11, 2019

Android - Predictive text exclusions in Samsung devices

Short version

Samsung keyboard predictive text exclusions are located in the following location and database:
data/data/com.sec.android.inputmethod/databases/RemoveListManager
The following query at https://github.com/abrignoni/DFIR-SQL-Query-Repo
can be used as a template to extract text exclusion entries from the RemoveListManager database.
  • RemoveList
    • Fields: removed_word, time_word_added
SwiftKey predictive text exclusions are located in the following location and text file:
data/data/com.sec.android.inputmethod/app_SwiftKey/user/blacklist
  • Blacklist
    • Text file contents are composed of one excluded word per line. 

Long Version

The following discussion shows how excluded words from the Samsung keyboard's predictive text are stored in Samsung Android devices. 

Predictive text options in Samsung Android phones
Predictive text is an Android feature that learns the user's most used typed words and presents them as options for autocomplete. For example, if I type the city name "San Juan" with regularity on my device, the next time I start typing "San" the predictive text option will volunteer the full name "San Juan" as an option to complete the word for me. Auto-predictive text saves the user time since I can type the full name in just 4 taps (3 for "San" plus a tap on the suggestion or the spacebar for auto complete) instead of the 8 taps needed for the full name to be spelled out.

What happens when the keyboard constantly gives you a suggestion you don't want for a set of initial letters? Imagine that instead of typing "San Juan" you now find yourself typing "San Lorenzo" since you moved to a new city. Every time you type "San" you get the suggestion "San Juan" instead of "San Lorenzo". By long-pressing the suggestion box the keyboard gives you the option to stop suggesting the pressed word going forward.

What happens to the long-pressed word that will now no longer be suggested? The same process is used for text exclusions in the SwiftKey keyboard app. Where do these excluded or blacklisted words reside? Why would finding these items be of importance to the forensic analyst?

Samsung Keyboard Analysis

On Samsung Android devices data related to keyboard configurations reside in the data/data/com.sec.android.inputmethod directory. The following image shows the contents of the aforementioned directory.

Databases folder exists when a blacklisted word exists

Notice in the image above how the databases folder is highlighted. This directory did not exist until I added a word to be excluded on the Samsung keyboard. Within this directory resides the SQLite database named RemoveListManager. Within the database the RemovedList table keeps the excluded word list.


In the previous image the word EMBASSIES was excluded. Notice the added time. While doing testing the actual time of addition was 2019-06-11 09:13:51. There is a difference of 4 hours. It is my assumption that the date shown is UTC time in human-readable format. This underlines how important it is to test your conclusions. On your casework it is key to duplicate the environment by getting a phone similar to the original and doing your own testing. 
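A minimal query sketch against this database follows. The short version lists the table as RemoveList while this section calls it RemovedList; use whichever name your copy of the database actually contains. Note that time_word_added already appears to be human-readable UTC, so no epoch conversion is attempted here.

import sqlite3

with sqlite3.connect('RemoveListManager') as db:
    rows = db.execute('SELECT removed_word, time_word_added FROM RemoveList '
                      'ORDER BY time_word_added')
    for word, added_utc in rows:
        print(added_utc, word)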

Samsung SwiftKey Analysis

Just like the Samsung Keyboard, the SwiftKey keyboard app keeps pertinent data in the data/data/com.sec.android.inputmethod directory. 


As seen in the previous image the user directory is where the excluded word list resides. The following image shows the contents of the blacklist text file.

For this analysis the creation and modified dates of the file can be used to show when the list was first created and last modified. Words excluded between the first and last entries on the list lack a timestamp or a way to infer one.

Why does this matter?

Excluded words are voluntary user-generated events. When a user decides to exclude a word it is because the keyboard constantly suggests it for letter combinations the user types often. What would a list of excluded words that mostly contains terms related to child exploitation tell the digital forensic analyst? What can the analyst infer the user was typing? When was the exclusion made? Can the timestamp be correlated to a location by the use of another type of artifact on the device? User-generated events tend to have relevance to our analysis and should be sought out and aggregated. We can make out the forest by getting at all those trees.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com. 

Monday, June 3, 2019

Finding Badoo chats in Android using SQL queries and the MAGNET App Simulator

Short version

The Badoo Free Chat and Dating app keeps user generated chats in the following SQLite database:
userdata/data/com.badoo.mobile/databases/ChatComDatabase
The following queries at https://github.com/abrignoni/DFIR-SQL-Query-Repo can be used as templates to extract chats from the Badoo database:

  • Messages
    • Sender name, recipient name, chat message, create time, modified time, server status, payload.
  • User data
    • User ID, username, gender, age, user image url, photo url, max unanswered messages, sending multimedia enabled, user deleted.
By using the MAGNET App Simulator the messages can be easily seen in their native format. The simulator can be downloaded from here:
https://www.magnetforensics.com/resources/magnet-app-simulator/
Long version

The Badoo application is a chat and dating platform for Android and iOS. The app website claims to have over 425,000,000 users and counting.

Large install base
The app seems to be fairly popular in the Google Play store with over 4 million reviews.


The following analysis came to be due to a request from a digital forensics examiner who was not able to parse the app data using commercial mobile forensic tools. I obtained consent from my colleague to use the data sets in the creation of the queries and accompanying blog post. That being said, I will obscure usernames and chat content in the data sets because they are in French, which I do not speak, and I want to avoid publishing something without knowing what it says.

Analysis via SQL queries

The data is kept in the SQLite ChatComDatabase file located in the userdata/data/com.badoo.mobile/databases/ directory. Within the database there are 2 tables containing data of interest.

Conversation_info
This table contains the user IDs, gender, user names, age and profile photo URLs for all the users that chatted with the local Badoo app user. It is of note that the local app user information is not contained within this table. To identify the local user information I emulated the app with the Magnet App Simulator (more on that later) and was able to see the name and age of the local user.

Username obscured
With that information on hand I processed the app directory with Autopsy and did a text search for the user name which had a hit in the following path and filename:
userdata/data/com.badoo.mobile/files/c2V0dGluZ3M=
Note the Base64-formatted filename. Using CyberChef it was easy to convert the Base64 filename to ASCII, as seen in the next image.
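The same conversion CyberChef performs can be reproduced in a couple of lines of Python for reference:

import base64

print(base64.b64decode('c2V0dGluZ3M=').decode('utf-8'))  # prints: settings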


By looking at the contents of the settings file with Autopsy the following data can be obtained regarding the local user:

  • Username
  • Birth date
  • Telephone numbers
  • Weight & height
  • Body type
  • Workplace
  • Sexual orientation
  • Political orientation

It is of note that this user-generated data surely would vary depending on how much the user adds to their profile. Further testing would be required to confirm.

Regarding the user data of individuals that exchanged messages with the local user the User data query can be used to get the following column values as seen in the next image.



Messages
This table contains the user IDs, timestamps, and chat messages. The chat messages are contained in a field labeled payload that holds them in JSON format. It is really easy to extract them using SQLite's json_extract function. For an example of how to use the json_extract function see the following post on Slack app message parsing:
https://abrignoni.blogspot.com/2018/09/finding-slack-messages-in-android-and.html
Since the messages are referenced by user IDs, a join of the messages and conversation_info tables had to be used to determine the sender and recipient names. The query had to take into account that the local user's information is not found within the conversation_info table. This made it difficult to join the tables by user ID, since the most important user (the local user) had no user name data to join. To overcome that obstacle I used two separate query conditions.

  1. Left join conversation info on sender_id = user_id
    This condition gave me all sender user names, including null rows that had data but no corresponding user name (i.e., the rows for messages sent by the local user).
  2. Left join conversation info recipient_id = user_id
    This condition gave me all recipient user names, including null rows that had data but no corresponding user name (i.e., the rows for messages received by the local user).
With these two queries in hand, the idea was to join both selects by each row's unique ID. This guarantees there isn't a one-to-many selection that would cause rows to be unnecessarily repeated. Then a simple ORDER BY created time puts all the messages in their proper order. I also added an ifnull condition to the query so that every null username value reads 'local user' instead. The query and the result look as follows:
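For orientation, the sketch below condenses that approach into a single statement with two left joins (the repository query joins two sub-selects by row ID, which yields the same result when user_id is unique). Column names other than user_id, sender_id, recipient_id, and payload, as well as the JSON key and timestamp format, are assumptions for illustration; see the linked repository for the actual query.

import sqlite3

QUERY = """
SELECT ifnull(s.user_name, 'local user')                 AS sender_name,      -- assumed column
       ifnull(r.user_name, 'local user')                 AS recipient_name,
       json_extract(m.payload, '$.text')                 AS chat_text,        -- assumed JSON key
       datetime(m.created_timestamp / 1000, 'unixepoch') AS created_utc,      -- assumed column/format
       m.payload
FROM messages m
LEFT JOIN conversation_info s ON m.sender_id = s.user_id
LEFT JOIN conversation_info r ON m.recipient_id = r.user_id
ORDER BY m.created_timestamp;
"""

with sqlite3.connect('ChatComDatabase') as db:
    for row in db.execute(QUERY):
        print(row)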

To see the full query see the previously provided link

It is of note that I have added the payload data field with all the JSON content in it. This was important since some of the JSON content might not be a chat message but data regarding a shared image. When the chat_text field is null in the query results the examiner can simply go to the contents of the payload field to determine additional information like upload ID, expiration timestamp and the URL of the image itself. In the preceding image notice how the chat_text null field rows say "type":"permanent_image" in the payload field.

I plan to have these queries submitted to the MAGNET Artifact Exchange Portal soon.

MAGNET App Simulator
Main screen
As stated previously, I used the simulator to identify local user data by visualizing the app data through the app itself. The process is simple and straightforward. 

The first thing to do is extract the app APK from the device.

Load the APK

Then load the app directory.
Load app directory

The simulator brings up an Android instance within VirtualBox, installs the APK, and injects the app data into this new virtualized app instance.

Installing, importing, & injecting

The results are incredible.
Chats viewed within the app itself, as intended

Conclusion
This analysis was interesting to me for a couple of reasons. The first underlines the importance of always doing a manual visual check of which apps are present in our extractions and how many of those are parsed by our tools. The difference requires manual attention, since the most important piece of data might reside where it is not readily found. The second is that simulation or virtualization of apps does not substitute for manual database analysis, and that both techniques can and should be used together to guide a deeper analysis of the application data. Without the combination of both techniques the rich repository of local user data might have gone unnoticed, since it wasn't fully accessible in the databases nor in the virtualized screens.

To end, I would like to thank not only those who contribute to the DFIR field with tools, scripts, and data sets but also those who reach out to ask questions because they want to learn and grow. Truly there is no better way to learn than by trying to fill the gaps of things yet to be known.

As always I can be reached on Twitter @AlexisBrignoni and email 4n6[at]abrignoni[dot]com.