Saturday, November 10, 2018

Finding TikTok messages in iOS

Short version

The iOS TikTok app keeps message-related data in the TIMMessageORM table of the following SQLite database:
/private/var/mobile/Containers/Data/Application/UU-ID/Library/Application Support/ChatFiles/User-ID/db.sqlite
Be aware that in the path UU-ID should be replaced by the application identifier and User-ID by the TikTok user identifier of interest. The process to obtain the correct UU-ID number is presented in brief in the long version section. For a detailed example see here. To obtain the correct User-ID directory number for a user of interest see the contents of the awemecontacts table from the following SQLite database:
/private/var/mobile/Containers/Data/Application/UU-ID/Documents/AwemeIM.db
Queries that can be used as templates to extract messages from the database can be found at:
https://github.com/abrignoni/DFIR-SQL-Query-Repo/tree/master/iOS/TIKTOK
In order to view the public TikTok profiles of the users found in the awemecontacts table in the AwemeIM.db database add the user name ID to the end of the following URL:
https://m.tiktok.com/h5/share/usr/(insert username ID number from DB).html
For one of the test accounts used in this blog post the URL looks like this:
https://m.tiktok.com/h5/share/usr/6619782258185388037.html 
Videos created with the app can be found in the following directory with the .mp4 extension:
/private/var/mobile/Containers/Data/Application/UU-ID/temp/
 Long version

TikTok is one of the most popular apps in the iOS App Store.

For an example on how to obtain a file system extraction from a rooted iOS device see here.

Testing Platform

For analysis I am using the following device and equipment:
  • iPhone SE - A1662
  • iOS 11.2.1
  • Jailbroken - Electra
  • Forensic workstation with Windows 10 and SSH software.
Acquisition

A brief overview of the identification and extraction of the TikTok user data app directory is as follows. For a detailed example see here.

1. Locate bundle id name.


2. Access the 'applicationState.db' file located at:
/private/var/mobile/Library/FrontBoard/ 

This SQLite database provides the connection between the bundle id and the UUID numbers in the 'Application' directory.

Open the SQLite database with a SQLite browser. Look for the bundle id name in the 'application_identifier_tab' table. Take note of the corresponding id number.


3. Look for that id number in the 'kvs' table under the 'application_identifier' field. Export the blob in the value field for the id. The exported data is a bplist that maps all pertinent UUID numbers to the application name and/or bundle id. The data can also be seen in the preview pane in binary mode without the need to export the blob content. If the bplist is exported, a viewer like Sanderson Forensics BPlister can be used to see the relationship between the UUID and the application we are looking for. A small scripted version of these lookups follows the steps below.


4. With the correct application directory identified I copied it via SSH to the forensic workstation.
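For reference, steps 2 and 3 can also be scripted. The sketch below assumes the table layout described above (an application_identifier_tab table keyed by an id number and a kvs table whose value field holds bplist blobs) and uses a placeholder bundle id; it is an illustration, not a tested tool.

```python
# Minimal sketch: resolve a bundle id to its Data/Application UU-ID directory
# using applicationState.db. Table/column names follow the description above.
import sqlite3
import plistlib

APP_STATE_DB = "applicationState.db"   # copied from /private/var/mobile/Library/FrontBoard/
BUNDLE_ID = "com.example.tiktok"       # placeholder -- use the real bundle id located in step 1

con = sqlite3.connect(APP_STATE_DB)
cur = con.cursor()

# Step 2: find the numeric id for the bundle id.
cur.execute(
    "SELECT id FROM application_identifier_tab WHERE application_identifier = ?",
    (BUNDLE_ID,),
)
row = cur.fetchone()
if row is None:
    raise SystemExit("bundle id not found")

# Step 3: pull the value blobs for that id and look inside any bplists for
# strings that reference the Data/Application container path.
cur.execute("SELECT value FROM kvs WHERE application_identifier = ?", (row[0],))
for (blob,) in cur.fetchall():
    if not blob or not bytes(blob).startswith(b"bplist"):
        continue
    try:
        plist = plistlib.loads(bytes(blob))
    except Exception:
        continue  # not every blob is a parseable plist
    objects = plist.get("$objects", []) if isinstance(plist, dict) else []
    for obj in objects:  # NSKeyedArchiver plists keep their strings here
        if isinstance(obj, str) and "/Containers/Data/Application/" in obj:
            print(obj)  # reveals the UU-ID directory for the app

con.close()
```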


In this particular instance I did not make a full file system extraction of the device. I only copied the app directory of interest for testing purposes. Do follow generally accepted forensic principles when doing similar work in your own casework.

Chats and Media

As stated in the Short version portion of the blog post, the message data can be accessed by joining the contents of two tables that reside in two different databases.

1. The TIMMessageORM table in db.sqlite
2. The awemecontacts table in AwemeIM.db

The paths for these files are respectively:

1. /private/var/mobile/Containers/Data/Application/UU-ID/Library/Application Support/ChatFiles/User-ID/db.sqlite
2. /private/var/mobile/Containers/Data/Application/UU-ID/Documents/AwemeIM.db

The messages query found in the DFIR SQL Query Repo for TikTok produces the following results:


The column data is as follows:
1. sender = The numeric user id. The value is used to join the two tables in order to access usernames (see the join sketch after this list).

2. profilepicURL = The link to the user's profile picture.

3. customID = Account username.

4. nickname = Precisely what it says.

5. Local_create_Time = Local/device time for a particular message.

6. servercreatedat = Server/remote time for a particular message. A value of zero indicates the message did not leave the device.

7. message = The content of the message.

8. localresponse = Additional information for a particular message. For example for messages that did not leave the device this field will provide some diagnostic information.

9. links_display_name = If the user responds with an image or a gif this field will have the display name of the file.

10. links_gif_url = The link for the sent image or GIF. Contents can be accessed without authentication.
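The query in the repo takes care of unpacking the JSON fields; as a rough illustration of the join described in item 1 above, something along these lines can be run against exported copies of both databases. The output column names mirror the list above, but the raw table columns and the JSON handling in the actual repo query will differ, so treat this strictly as a sketch.

```python
# Sketch of the cross-database join: TIMMessageORM (db.sqlite) to
# awemecontacts (AwemeIM.db) on sender = uid, as described above.
# Column names are approximations of the query output labels listed above.
import sqlite3

con = sqlite3.connect("db.sqlite")                 # ChatFiles/User-ID/db.sqlite
con.execute("ATTACH DATABASE 'AwemeIM.db' AS im")  # Documents/AwemeIM.db

query = """
SELECT m.sender,
       c.customID,
       c.nickname,
       m.message
FROM   TIMMessageORM AS m
LEFT JOIN im.awemecontacts AS c ON c.uid = m.sender
"""
for row in con.execute(query):
    print(row)
con.close()
```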

The user data query found in the DFIR SQL Query Repo for TikTok produces the following results:


The column data is as follows:
1. uid = Numeric user id. 

2. customID = Account username.

3. nickname = Precisely what it says.

4. latestchattimestamp = Last timestamp for a chat.

5. url1 = Link for the profile pic of the user.

One can use the uid number to access the public profile of the user over a browser. Just use the following URL and fill it with the uid of interest.
https://m.tiktok.com/h5/share/usr/(insert username ID number from DB).html
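Building those links from a list of uid values exported from the database takes only a couple of lines (the uid below is the test account already shown in the Short version):

```python
# Build public profile URLs from uid values pulled from awemecontacts.
uids = ["6619782258185388037"]  # test account uid from this post
for uid in uids:
    print(f"https://m.tiktok.com/h5/share/usr/{uid}.html")
```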
Here is one of my test account profiles as an example:

 All public shared videos can be seen in the profile.

Videos created with the app can be found in the following directory with the .mp4 extension:
/private/var/mobile/Containers/Data/Application/UU-ID/temp/
Conclusion

The TikTok app, both in Android and iOS, stores JSON data within SQLite databases. Currently I don't know of any major forensic tool vendor that has the json_extract function enabled in their SQLite implementations. This means that queries that can handle JSON data can't be incorporated into their artifact/template generation tools except via the use of more complex JSON-handling Python scripts.

For the time being exporting the databases of interest and executing SQL queries on them via a third party SQLite browser tool will be my preferred choice.

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.

Friday, November 9, 2018

Finding TikTok messages in Android

Short version

The Android TikTok app keeps message related data in SQLite databases located in the following location:
userdata/data/com.zhiliaoapp.musically/databases/
The database containing user data, both the local user and friends, is named db_im.xx.
The database containing the messages is named in the following regex format: ([0-9]{19})(_im.db)$, where the filename is a 19-character numeric sequence ending in the _im.db suffix.

Queries that can be used as templates to extract messages from the database can be found at:
github.com/abrignoni/DFIR-SQL-Query-Repo/tree/master/Android/TIKTOK


In order to view the public TikTok profiles of the users found in the db_im.xx database add the user id number to the end of the following URL:
https://m.tiktok.com/h5/share/usr/(insert username number from DB).html
For one of the test accounts used in this blog post the URL looks like this:
https://m.tiktok.com/h5/share/usr/6619791930123403269.html 
Multiple XML app files can be located at:
userdata/data/com.zhiliaoapp.musically/shared_prefs
Some of the app related info contained within the XML files includes:

  • Total traffic
  • Collect traffic time
  • Recent search history 
  • Mobile traffic 
  • Language
  • Region
  • First open time
  • App install time
  • Last update time
  • Mac address
  • Last wifi bssid
  • Last time check bssid
The previous are just a few examples of the type of content the XML files store. Additional user info can be found in the aweme_user.xml file.
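Android shared_prefs files follow a simple key/value XML layout, so one quick way to triage all of them at once is a small script like the one below. It is a sketch that assumes the standard SharedPreferences <map> format; adjust the path to your extraction.

```python
# Dump every key/value pair from the app's shared_prefs XML files.
import glob
import xml.etree.ElementTree as ET

for path in glob.glob("shared_prefs/*.xml"):  # .../com.zhiliaoapp.musically/shared_prefs
    print(f"== {path}")
    for item in ET.parse(path).getroot():
        # <string name="...">text</string> keeps its value as text;
        # <long/int/boolean name="..." value="..."/> keeps it as an attribute.
        value = item.get("value") if item.get("value") is not None else (item.text or "")
        print(f"{item.tag:8} {item.get('name')} = {value}")
```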

Videos created with the app can be found in three files. One contains the video, another the audio for it and a third one combines both. The files are located at:
userdata/data/com.zhiliaoapp.musically/files/
The filenames are a dash separated timestamp followed by a numeric sequence that ends with -concat-v for the video and -concat-a for the audio. A sample audio filename would be something like this: 2018-11-03-210557702-mix-concat-a.

For the combined video and audio file the filename will follow the previous format with the addition of the synthetise_ prefix. For example: synthetise_2018-11-03-210556218-concat-v.
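Based on the naming conventions just described, a short script can sort the files directory into those three buckets and pick out the message database by its 19-digit name. Directory names below assume the paths given earlier in this section.

```python
# Classify TikTok Android artifacts by the filename conventions described above.
import os
import re

DB_DIR = "databases"   # userdata/data/com.zhiliaoapp.musically/databases/
FILES_DIR = "files"    # userdata/data/com.zhiliaoapp.musically/files/

msg_db = re.compile(r"^[0-9]{19}_im\.db$")
for name in os.listdir(DB_DIR):
    if msg_db.match(name):
        print("message database:", name)  # 19-digit prefix = logged-in user id

for name in os.listdir(FILES_DIR):
    if name.startswith("synthetise_") and name.endswith("-concat-v"):
        print("combined video+audio:", name)
    elif name.endswith("-concat-v"):
        print("video only:", name)
    elif name.endswith("-concat-a"):
        print("audio only:", name)
```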


Long version

The Android TikTok app is one of the more popular apps in the Google Play store with over 100,000,000 downloads. The app is used to create short videos where the user can easily edit the sounds and visuals, and share them within the social media environment it provides.

This app is ridiculously popular with teens.
As with most social media platforms, the app provides a way for users to send messages to each other.


Testing platform

Details of hardware and software used can be found here.

Extraction and processing

Magnet Forensics Acquire - Full physical extraction. 
Autopsy 4.8.0
DB Browser for SQLite

Analysis

The TikTok app directory structure looks as follows:

Usual Android app file structure.
As stated at the beginning of the post, the main messaging content SQLite database is named by the following pattern: ([0-9]{19})(_im.db)$. The 19-character number at the start of the file name is the same as the user id of the logged-in user of the app. The messages table does not contain the actual user names; that information resides in a second database called db_im.xx. The table for the messages is appropriately called msg.

The following image shows some of the more relevant fields in the msg table:

JSON, we meet again!!!!
As expected the creation time of the messages is in Unix epoch and the actual text content is in JSON format. The extract messages query at the top of the blog uses the json_extract function to separate the relevant JSON into its own database response columns.

It is of note that some of the messages have the read_status value in the last column set to zero. This means that the message did not reach the server. In my test those messages were sent before the target account had followed the account initiating the message. The local info column contained relevant information that will help the analyst understand the reason for a read_status of zero. Again, in this instance the local info message read as "This person hasn't followed you yet and may not be able to receive your messages."

Local_info column value. Long hand for sorry can't do that Dave.
Next is the content of the user data table, named SIMPLE_USER, in the db_im.xx database.

Notice the user id, nickname and unique_id values. The avatar thumb url, in JSON format, is there as well. The SQL query for messages joins both the messages and user data tables to present a unified result for all messages sent and received. As always it converts the time from Unix epoch to local time and extracts all the relevant JSON into its own result columns. For the query to work one of the databases needs to be attached so the query can have access to both databases and the necessary tables from each.
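To make the ATTACH-plus-join mechanics concrete, here is a stripped-down sketch. The msg and SIMPLE_USER table names come from the text above, but the message column names, the JSON key and the timestamp units are assumptions for illustration only; the working query lives in the DFIR SQL Query Repo.

```python
# Sketch: join the msg table (19-digit *_im.db) with SIMPLE_USER (db_im.xx).
import sqlite3

con = sqlite3.connect("6619791930123403269_im.db")  # use the 19-digit file from your extraction
con.execute("ATTACH DATABASE 'db_im.xx' AS userdb")

query = """
SELECT datetime(m.created_time, 'unixepoch', 'localtime') AS local_time, -- column name/units assumed
       u.nickname,
       u.unique_id,
       json_extract(m.content, '$.text') AS message_text,                -- JSON key assumed
       m.read_status
FROM   msg AS m
LEFT JOIN userdb.SIMPLE_USER AS u ON u.uid = m.sender                    -- join columns assumed
ORDER  BY m.created_time
"""
for row in con.execute(query):
    print(row)
con.close()
```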

The next image shows a portion of the extract messages query results. See how, if the user responds with a GIF, the URL and display name are extracted from the JSON and given their own columns. These URLs are accessible without authentication.

Love it when a plan comes together.
One useful trick is to take the UID values and insert them into a specific URL in the following manner:
https://m.tiktok.com/h5/share/usr/(insert username number from DB).html
 If the account is not private you will be able to see the shared content. Here is an example:

This is the account.

The app also maintains the shared content as described in the first section of the blog post as well as multiple XML files that can be of use to the analyst. In my sample data set there were 77 XML files. Going through them here is beyond scope but it is highly recommended to take the time and understand their content.

Next post will be about finding TikTok messages in iOS. When those queries are completed they will be added to the DFIR SQL Query Repo as well.

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.



Thursday, November 1, 2018

Quick DFIR review - CCleaner for Android

Short version

CCleaner for Android keeps application usage related data in the following location:
data/data/com.piriform.ccleaner/databases/
 The following queries at https://github.com/abrignoni/DFIR-SQL-Query-Repo can be used as templates to extract information from the CCleaner SQLite databases.
CCleaner installation and first usage timestamps can be found in the following XML file:
com.google.android.gms.measurement.prefs.xml
The XML file is located at:
data/com.piriform.ccleaner/shared_prefs/ 
Long version

Mobile extractions can provide different degrees of visibility into what type of artifacts can be gleaned from the device under examination. For example in most instances a physical dump will enable us to get at more artifacts than an Android backup extraction would. In situations where the extraction at hand provides limited visibility we can look at extracted apps to complement our view.  

Testing platform

For my testing platform see here.

Analysis

CCleaner for Android is a well known app with over 50 million downloads. 



The app is advertised as a simple way of reclaiming storage space and speeding up the device.

When the app is accessed and the Analyze button is pressed, it presents a bar graphic stating that a scan is in progress. When the bar gets completely filled, a list of apps and related directories that can be cleaned by emptying their corresponding cache folders is shown. It looks like this:

Apps after scanning
The list is divided into the following categories:

  • Hidden cache
  • Visible cache
  • Empty folders
  • Downloads
When the FINISH CLEANING button at the end is pressed the app presents how much space was saved.

Saved space.
Running the scan a second time the app list is shorter than the first go around.

Fewer apps.
By looking at the data/data/com.piriform.ccleaner/databases/ directory one sees that the database which contains the scanned app data is appropriately named scanner-cache.db. This database lists apps that have data in the /data/data/ and related /data/media/ directories. When a scan is responsive to any of those directories the package name and the timestamp are added to the appInfoCache table in scanner-cache.db. Based on my testing the timestamps in the database will correspond to the latest scan of the apps and directories, hence why all the timestamps are separated at most by milliseconds. Is there an exception to the rule?

In the following image I sorted the contents of the database by timestamp. Notice the first entry.

Odd man out.

Notice the scan time in the image. The Flud application was not available in my sample Android backup extraction. I would not have known the device had the application installed at some point since my extraction was not able to pull the usual files used to determine installed applications (like packages.xml). The only way I found to replicate such behavior in CCleaner was to delete an app with the Android uninstall function, as opposed to using CCleaner itself to perform the uninstall, while also making sure that the deleted app did not have any associated directories in the emulated storage directory.

Odd man out.
What does this tell us? Not only does the list tell us what installed applications were at some point scanned by CCleaner, it can also give us a time range for when the odd-man-out app, scan-time wise, was deleted. In the first image the Flud app was deleted at some point between 2018-09-24 08:00:00 and 2018-10-10 17:51:33. In the second image the Translator app was deleted at some point between 2018-10-30 17:30:04 and 2018-10-30 21:32:57.
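A quick way to reproduce that sort and spot the outlier is a few lines like the sketch below. The appInfoCache table name comes from the database itself; the column names are assumptions, so confirm them against your copy of scanner-cache.db.

```python
# List scanned packages from CCleaner's scanner-cache.db ordered by scan time.
import sqlite3

con = sqlite3.connect("scanner-cache.db")
rows = con.execute(
    "SELECT packageName, timestamp FROM appInfoCache ORDER BY timestamp"  # column names assumed
).fetchall()
for package, ts in rows:
    # Timestamps clustered within milliseconds belong to the latest scan; an
    # entry far older than the cluster is the 'odd man out' described above.
    print(ts, package)
con.close()
```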

With this information the next steps would be to see if any other artifacts for Flud or Translator exist on the extractions under examination. For example in the case of Flud you might want to start looking for media player history files that include the application path for Flud downloads. As a popular BitTorrent client many individuals use it to download multimedia files. A media player that accesses such paths might retain the paths and filenames of media viewed from Flud directories even if those directories are not present in the extraction.  

Can apps that have never been downloaded and/or installed be on this list? I have not seen this to be the case, for obvious reasons. Can deleted apps that still have a last/current scan time be found on the list? Yes! And this is why testing and validation is so important. To assume that all the apps in the list were present/installed at the most current scan time could lead us to make an unsupported statement. My current hypothesis on why deleted apps might still have the most current scan times rests on the fact that these apps have related directories in emulated storage or in another location on the device. My limited tests seem to support the hypothesis but further testing is needed to confirm.

Another CCleaner file of interest is the com.piriform.cleaner.db database. It contains automated scheduled cleans.

Schedule your cleans.
I have not found this database in a free CCleaner app installation.

The cleaner_apps_db contents seem to match a list of installed apps. I have seen deleted apps persist in this database.


A list of CCleaner usage stats can be found in the cleaner database. 
Usage stats
Items that start with SAFEC are generated when the Analyze and Clean option is selected. Items that start with ADVC_CLEARED are generated when an Analyze and Clean Memory option is selected. The statistics are grouped by day and do not contain details on what was deleted.

A list of media filenames in emulated storage can be found in the myroll database.

Media list in emulated storage. Notice the was_deleted column.
Installation and first use CCleaner timestamps can be found in the com.google.android.gms.measurement.prefs.xml file located at data/com.piriform.ccleaner/shared_prefs/.

Conclusion
 

Third-party apps can augment our visibility into device artifacts when we are working with extractions that have limited file or system information. Testing and validation remain of paramount importance since any conclusions we reach need to be supported by evidence and not only hunch or opinion.

Since information that is not shared is information that is lost, these SQL queries are now accessible at the DFIR SQL query repository.

https://github.com/abrignoni/DFIR-SQL-Query-Repo/tree/master/Android/CCLEANER

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.



Monday, October 22, 2018

DFIR tool review: Cellebrite Virtual Analyzer


Virtual Analyzer (VA) is an Android emulator that works within Cellebrite Physical Analyzer (PA). It provides a convenient way to look at data logically (no deleted/recovered items) as the user would have seen it through the app itself. This blog post will briefly detail my initial interactions and reactions regarding the use of the tool.

Installation notes will be at the end of the post.

Testing
For this test I used one of my Android physical extractions that contained some sample data used for previous app reviews in this blog. I opened PA and pointed it to the physical extraction and let it parse. When it was done the VA option was available under the tools menu.

After the initial splash screen shown above, the tool lets you choose up to 5 apps to emulate at one time. For my initial run I selected the following 3 apps: Slack, LA Fitness Mobile and Microsoft Translator.

Select your apps.
The interface was clean and consistent with the PA screens I am already used to. After selection, the next screen shows how much data the emulator will work with per app.
Notice the 'good to know.'
Processing took a little bit of time but even so it was still way, way, way quicker than doing a similar analysis by 'hand' as I have done here.


As VA starts you will see a progress bar show up on the lower right corner of the screen. When completed a window opens that has the emulated system ready for user interaction.


The selected apps will show up on the home screen and one has only to click them as one would in any Android system. By looking at the settings I noticed that the emulation was done via the Andy Android emulator.


Going back to the home screen I interacted with the apps. All three worked and presented the data I expected to see based on my previous non-emulated analysis. Here is how it looked per app. A link to the non-emulated analysis is provided as a comparison.

Microsoft Translator

Emulated analysis image for Microsoft Translator:
All the options in the app worked.
LA Fitness Mobile

Emulated analysis image for LA Fitness mobile:

In this instance, looking at check-in data through the emulated app was harder and slower than looking at the extracted data directly, which requires no screen manipulation. Still, the analysis validation provided by this tool is immense. A long Excel-like report with dates and data can be validated by looking at a portion of it through the emulated app itself.

Slack

Emulated analysis image for Slack:


I really liked being able to see the images properly placed within their corresponding Slack messages. I have yet to find a way to do such an image placement manually by only looking at the content of the databases themselves. Emulation solves this problem.

To finish my testing I looked at two more apps, Discord and Nike Run. Discord behaved as the previous apps did; the emulator showed me the content as expected.

Discord

Emulated analysis image for Discord:

As Discord seems to be showing up more and more in colleagues' case work being able to access it in this way is invaluable. In some instances emulation might be the only way of presenting the data absent an actual manipulation and screen recording of the device.

Nike Run

Emulated analysis image for Nike Run:

Womp, womp.
The emulation for the Nike app kept crashing and did not work. Using PA I looked at the database and the required data was there.

Time, GPS, speed, etc...
In this instance a database extraction paired with a custom SQL query was the ideal analysis method. It seems the app itself requires a connection to an external server before it initiates. VA, being a forensic tool, does not allow such a thing to happen. Permitting the host system that runs PA/VA and the emulator to be connected to the Internet (they shouldn't be, as a matter of course) can bring up issues regarding the illegal search of a remote system. Always be aware of what you do, how you are doing it and under what authority you are operating. Such things cannot be overlooked.

Conclusion 
There is no ultimate all-in-one forensic tool and most likely there never will be one. As examiners we need to exercise proper judgment on how we go about our work and what the ideal tools will be for a particular case or work scenario. It seems to me that Virtual Analyzer is poised to become one of those essential tools in our forensic toolbox.

If we could only have something similar for iOS....

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.

Installation Notes
Installation was easy but required a little tinkering. After downloading the executable from the Cellebrite portal I tried, mistakenly, to install it on a system that had a PA version older than the current and required 7.10. Instead of getting an 'update to current version' error I got the following:

Fair enough, it was my fault initially...
After noticing my mistake I upgraded to the latest version and the software almost installed. After some emails with support I found out that the Android emulator requires VMware Player to run. The error was caused by my previously installed version of VMware Workstation 12.5.9 not being compatible. Currently VA works with VMware Player versions 12 to 14 and the developers are working on a solution to have VA work with the latest release, version 15, of VMware Player. Since I couldn't uninstall Workstation 12 for reasons, I moved to another system and did a clean install of all the required software. This was my procedure:

  1. Download and install VMware Player version 14.
  2. Download and install PA.
  3. Download and install VA.

Quick DFIR review - LA Fitness Android app

Short version

The LA Fitness app for Android is an example of how a non-messaging application can have plenty of relevant data if only one cares to look.
  • Gym check-ins: club name, date, time and location. Sample data covered a year's worth of check-ins.
  • Pin code for the app.
  • Billing information to include the last 4 digits of the credit card in use.
  • Registration information: membership start date, name, address, phone, email, home club.

Long version

LA Fitness is a gym club with more than 700 locations across the United States and Canada. The Android app has more than a million downloads.

LA Fitness in the Google Play Store
A lot of case work involves finding GPS coordinates, message content and media of interest. The app does not seem to have anything to do with any of these, and still, as a user, I've noticed that some of the information it shows me could be of relevance in a case. Is there information that it does not show me? What could it be?

As always I've tried to research this topic using open source tools.

Testing platform

Details of hardware and software used can be found here.

Extraction and processing

Magnet Forensics Acquire - Full physical extraction. 
Autopsy 4.8.0

Analysis

The LA Fitness app data folders can be found here:
/userdata/data/com.lafitness.lafitness/

Interestingly enough, the relevant information resided in the files folder, not in databases. A look at the contents of the databases folder shows that most SQLite files seem to manage data related to multimedia files and menus used by the app. None seem to track any user-generated requests or actions. More testing is needed of course.

From the files directory I identified four files containing user related content.

1. Gym check-ins
/userdata/data/com.lafitness.lafitness/files/MembershipCheckins
Data is formatted as:

GYM-NAME - Street address
XX/XX/XXXX 00:00:00 AM/PM

The GYM-NAME is the same as the city where the facility is located.
In my sample data the check-ins go back as far as a year. A small parsing sketch for this format follows the file list below.

2. Pin code for app access 
/userdata/data/com.lafitness.lafitness/files/4-digitpin
Used for a screen app lock. Cleartext file.

3. Billing information
/userdata/data/com.lafitness.lafitness/files/AccountProfile

It is of note that the billing name can be different from the name on top. It also keeps the last 4 digits of the credit card number.

4. Customer information
/userdata/data/com.lafitness.lafitness/files/AccountProfile
Same fields as billing information with the addition of a few extra fields regarding the type of access the client has to gym services like personal training, guest passes and premium status.
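Assuming the MembershipCheckins file is plain text with alternating location and timestamp lines as shown in item 1, and assuming US-style MM/DD/YYYY dates (both assumptions to validate against known check-ins), a quick parsing sketch could look like this:

```python
# Parse check-in entries of the form described in item 1 above:
#   GYM-NAME - Street address
#   XX/XX/XXXX 00:00:00 AM/PM
# File layout and MM/DD/YYYY ordering are assumptions -- validate on known data.
from datetime import datetime

with open("MembershipCheckins", encoding="utf-8", errors="replace") as f:
    lines = [ln.strip() for ln in f if ln.strip()]

for location, stamp in zip(lines[0::2], lines[1::2]):
    when = datetime.strptime(stamp, "%m/%d/%Y %I:%M:%S %p")
    print(when.isoformat(sep=" "), "-", location)
```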

Conclusion

From the Android third party video player database that contained proof of file access to the gym application that had the pin code needed to access a previously out of reach data store, new data can be found when and where we least expect it. As examiners it is true that our time is short and the cases plenty but to be at our most effective we need to think more about how we can streamline our workflows. Lately I've been getting great results from parsing my mobile extractions with Autopsy for fast index searching within SQLite databases while making it a habit of going through the databases section in Physical Analyzer looking at non-decoded data stores. As part of this streamlining of processes I will review in the next blog post the new Virtual Analyzer by Cellebrite.

 As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.

Wednesday, October 17, 2018

Update to Android Discord app "missing" values

On 8/16/17 I posted a brief explanation on how to decode the last value in a line of chat text from the Discord app for Android.

https://abrignoni.blogspot.com/2017/08/discord-app-missing-values-not-missing.html

Today I had an email exchange with Patrick Mooney on why the app might behave the way it does.
I believe what is actually happening is that Discord is not translating the character set but rather is setting the most significant bit on these fields to annotate that it is the end of the field. In the data I've looked at the last characters are 'translated' up a constant value of 0x80, which is actually a simple setting of that most significant 8-bit value. ASCII doesn't use that most significant bit so it makes it a natural delimiter that also does not require the use of additional characters or storing field lengths & counting characters.

Good stuff. As more folks encounter Discord in their case work I am having the pleasure of hearing from them. Most times I give additional background regarding my posts on the topic but in this case I learned something new which I get to share. Thanks Patrick!
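To illustrate the idea from the quote (a sketch of the concept, not Discord's documented behavior): if a trailing byte has its most significant bit set, clearing that bit recovers the original ASCII character.

```python
# Recover ASCII characters whose most significant bit was set as an
# end-of-field marker, per the explanation quoted above.
def strip_msb_marker(raw: bytes) -> str:
    return "".join(chr(b & 0x7F) for b in raw)

# 'g' (0x67) stored with the MSB set becomes 0xE7, so:
print(strip_msb_marker(b"missin\xe7"))  # prints "missing"
```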

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.



Sunday, October 14, 2018

Github repository for SQL queries used in digital forensics

As I started to share some of the queries I use in my analysis of different apps I noticed how much screen space these take in a blog post. If the analysis requires more than one query or one query that joins many tables with many relevant fields the whole thing is pretty much unreadable when written as part of the regular text of a post. To solve this I made the following repo:

https://github.com/abrignoni/DFIR-SQL-Query-Repo

The idea is to house those SQL queries by platform and application. The readme of these queries will have an accompanying explanation on usage or a link to a blog post that does so. This has the benefit of making it easier to update the queries without having to go back and find the original blog post. It is also easier to search for a particular query per application.

When I write a blog post I just have a link to the proper location in the repo where the reader can go and look at the query. Github is awesome because it does reserved-word highlighting on its own. Super easy to read and no more query clutter in the post.

Only mobile apps for now.

If anyone wants to contribute some of their queries please do so. If anyone knows of a more widely known way of sharing DFIR related SQL queries please let me know. Be it in my repo or somewhere else the idea is to make these queries available and have them be easy to search and maintain.

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.


Finding Slack app messages in iOS

Short version

The Slack app for iOS keeps message related data in the following database location. Be aware that in the path *UU-ID* should be replaced by the application identifier and *DB-ID* by the Slack work-space identifier of interest.

/private/var/mobile/Containers/Data/Application/*UU-ID*/Library/Application Support/Slack/*DB-ID*/Database/main_db
Details about these IDs are found below in the long version section of the blog post.

The following queries at https://github.com/abrignoni can be used as templates to extract relevant information from the database.
The following database is a repository of CFURL links, some in bplist format, in use by the application.
/private/var/mobile/Containers/Data/Application/*UU-ID*/Library/Library/Caches/com.tinyspeck.chatlyio/Cache.db 
Within the cfurl_cache_blob_data table, in one of the request_object fields, the Slack login/username and password for the user of the application were located in clear text. Details in the long version of the blog post below.

Images in use by the application, including pictures sent as attachments as well as avatars, can be found in the following location:
/private/var/mobile/Containers/Data/Application/*UU-ID*/Library/Application Support/Library/Caches/default/com.hackemist.SDWebImageCache.default/
Currently I know of no way to reverse the file names to their original form as found by the Files & attachments query.

Long version

Slack is one of the more successful corporate/work messaging apps in the world with over 8 million daily active users and 4 million paid users.

If you haven't seen it in your cases you will soon.
Slack for iOS is the mobile version for Apple products. In this blog post we will examine where the messages are located, how they can be extracted, and what other items of interest can be found in the app directories.

Extraction

For a detailed explanation on how to extract user data at the file system level from a jail-broken device see the following post. The steps, even though centered on another app, will be applicable to most if not all current iOS applications. Here is the abridged version for my Slack app analysis. Not to belabor the point, but if the short explanation that follows does not make sense please see the full example in the following post.

0. Looked up the bundle id for Slack here.

Would have never guessed the id.

1. Connected to device using SSH.

2. Made a local copy of the  applicationState.db located at /private/var/mobile/Library/FrontBoard/


3. Located the Slack id number in the application_identifier_tab table in the aforementioned database.

4. Used the application_identifier number to look up the value blobs in the kvs table. Exported the blob which contains a bplist. The contents can be seen by using the hex viewer function in your SQLite viewer tool of choice or after exporting.

5. Open the extracted bplist with a bplist viewer. The Slack directory UU-ID can be seen.
The long identifier in the picture will be different from the one in your device.

6. Copied the Slack application folder as identified in the bplist.
Notice the long ID number used as the directory name.

The copied folder contains the relevant files for our objective.

Analysis of message related databases

The directory structure of the Slack application is as follows:


Under /Library/Application Support/Slack/ are the team id numbers. For my testing team the number is TCJRXQDB1. In every team id folder there will be the database relevant to that team (/Database/main_db). In Android the actual database for the app has the id as its name.

Using a SQLite viewer one can see that the main_db database has 21 tables.



For my analysis I formulated five queries that identify what I believe to be pertinent information in most cases. It is of note that these queries are only templates. Each table has multiple fields and some of these might be relevant to your case and not be selected in the queries below.
Sample of query output.

Some notes on the values encountered for the extracted messages query:

1. User IDs start with the letter U, Channel IDs start with the letter C and Direct Message channel IDs start with the letter D. The linking of user messages to channels and channel metadata took some work since public channel ids, like General or Random, reside in a column of their own in one of the tables. Hence the need for the OR condition in the SELECT clause.

2. The ZLSKMESSAGE table had repeated rows in it. Not sure why. I used a DISTINCT clause at the beginning to clean it up. Be aware that you should look at the data manually first and then try the query with the DISTINCT clause and also without it. I believe it is important to have a clear understanding of the data and the state in which it was found.

3. The query sets the time to local time. Adjust as needed for your purposes.

4. The ZFILEIDS field indicates that the message was sent with a file attached to it. If a file was sent the field will have a value in it. The actual sent data might reside in the ZSLKFILE table. If it is a picture it will not; the picture will be located in the following folder:
/private/var/mobile/Containers/Data/Application/*UU-ID*/Library/Application Support/Library/Caches/default/com.hackemist.SDWebImageCache.default/
The filenames bear no relation to the ones found in the ZSLKFILE table. Currently I know of no way to map the filenames in the cache back to the original filenames.
Sample of query output.
Some notes on the values encountered for the user data query:

1. Notice the ZTEAMID value. It is the current working team. In iOS it is used to name the directory for the team database. In Android it is how the database itself is named. Additional relevant data will be discussed in the channels data query portion below.

2. The ZADMIN and ZOWNER fields identify the user/s that wield such designations. Boolean value.
Sample of query output.
Some notes on the values encountered for the files & attachments query:

1. If the content of the message attachment is available it will reside in the ZPREVIEW field. As explained previously images themselves are not found in the table. They reside in another directory named in a way that has no obvious relation to the original filename as stated in the query.

2. If proper consent/legal authorization is obtained the images can be downloaded from the server using the permalink URL. Validation via username and password will be needed. More on login username and password below.
Sample of query output.
Some notes on the values encountered for the work-space data query:

1. The ZDOMAIN value is the one used to name and connect to the work-space. In this instance it would be dfirtesting.slack.com. Internally the work-space is identified by the ID located in the ZTSID field.
Sample of query output.
Some notes on the values encountered for the channels data query:

1. There are tons of interesting timestamps in this table. Among them are the time the channel was created, when a topic and/or purpose for the channel was set, the last message in the channel and the last read of the channel.

2. The purpose and topic string is found under ZPURPOSETEXT and ZTOPICTEXT respectively.

3. The ZSHAREDTEAMIDS field has the current work-space listed as a list of one item. Haven't tested the shared teams functionality but my educated guess is that the list would be populated with the respective IDs from other work-spaces. This will require further testing.

Analysis of non-message related databases

The following database is a repository of CFURL links, some in bplist format, in use by the application.
/private/var/mobile/Containers/Data/Application/*UU-ID*/Library/Library/Caches/com.tinyspeck.chatlyio/Cache.db 
Within the cfurl_cache_blob_data table, in one of the request_object fields, the Slack login/username and password for the user of the application were located in clear text. See the image.

Username, password and work-space ID.
The content of this blob field starts as a bplist. Exporting the content and using a bplist viewer did not show me the username and password. Using the hex viewer to look at the contents did. One way of finding this data fairly quickly is by indexing your case with Autopsy (or any other forensics suite) and looking for __CFURLRequestNullTokenString__. The username and password will be fairly close to it.
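A scripted alternative to the index search is to walk the request_object blobs directly and flag the ones containing that token. A sketch (table and field names as given above; the byte offsets printed are arbitrary):

```python
# Scan request_object blobs in Cache.db for the CFURL marker noted above;
# the credentials were observed in clear text near that marker.
import sqlite3

TOKEN = b"__CFURLRequestNullTokenString__"

con = sqlite3.connect("Cache.db")  # .../Caches/com.tinyspeck.chatlyio/Cache.db
for (blob,) in con.execute("SELECT request_object FROM cfurl_cache_blob_data"):
    if blob and TOKEN in blob:
        start = blob.find(TOKEN)
        print(blob[max(0, start - 200): start + 400])  # credentials sit close to the token
con.close()
```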

Username, password and work-space ID.

As always these databases might contain deleted data in their respective WAL files. Use forensic SQLite tools to take that into account.

As always I can be reached on twitter @alexisbrignoni and email 4n6[at]abrignoni[dot]com.