Blade™ v1.9 Released - AFF® Support, Hiberfile.sys Conversion and New Evaluation Version

Digital Detective Software - Blade Professional - Forensic Data Recovery

This release of Blade brings a number of fixes and some great new features. This is the first release of Blade to have evaluation capabilities, which allow the user to test and evaluate our software for 30 days. When Blade is installed on a workstation for the first time (and a valid USB dongle licence is not inserted), the software will function in evaluation mode.

The following list contains a summary of the new features:
  • Support for Advanced Forensic Format (AFF®)
  • Hiberfil.sys converter - supports XP, Vista and Windows 7 (32- and 64-bit)
  • Accurate hiberfil.sys memory mapping, not just Xpress block decompression
  • Hiberfil.sys slack recovery
  • Codepage setting for enhanced multi-language support
  • SQLite database recovery
  • 30-day evaluation version of Blade Professional
  • New recovery profile parameters for more advanced and accurate data recovery
  • Support for Logicube Forensic Dossier®
  • Support for OMA DRM Content Format for Discrete Media Profile (DCF)
We have also been working on the data recovery engines to make them more efficient and much faster than before. The searching speed has been significantly increased.

Reposted from http://blog.digital-detective.co.uk/2012/02/blade-v19-released-aff-support.html

Downloads and Full Release Information

Computer Forensics and Forensic Accounting

Reposted from law

The author was among the earlier people in Taiwan to work in computer forensics (digital forensics), starting to read the relevant literature around 2002. There was very little material back then; even in the UK and US, books on computer forensics were scarce and the concepts were very muddled. (That was roughly when the Internet was first taking off, around the dot-com bubble.)

To keep technologically advanced Taiwan from falling out of step with this trend, the author worked through the books and literature then available from various countries and in 2004 published 「電腦鑑識與企業安全」 (Computer Forensics and Enterprise Security). The field was far too obscure at the time; the publisher later seemed to vanish, and the book went out of print. (Hopefully not because of the book itself.)

Even so, it was arguably the first book on computer forensics in Asia. From today's perspective its content is fairly shallow, but it was a small milestone nonetheless...

A few years later the field had developed considerably in practice, and around 2006 the government established its first forensics laboratory, marking a second milestone.

The author did not pursue more advanced computer forensics, and instead moved into the even more obscure field of digital evidence, publishing 「圖解數位證據」 (Illustrated Digital Evidence) in 2009, which laid out, in a clear and simple format, the mistaken reasoning found in court decisions. The field was still very obscure, of course, so sales remained dismal.

But books of this kind were never written to sell. For several years the field sat in the freezer, stone cold. While it was at its coldest, the author also completed a doctorate in law in June 2011, on the topic of digital evidence.

With the passage of the Personal Data Protection Act in 2010, computer forensics suddenly became a hot topic. Perhaps because computer forensics can help determine the cause of a data breach and serve as evidence of a company's due diligence, and because the damages under the Personal Data Protection Act are so high, every related seminar to date has been packed, and nearly every one includes a session on computer forensics.
The author had not originally intended to enter the personal data field.
Why not?

The reason is simple: computer forensics is not tied only to personal data. It should be relevant to every field, because every field can involve digital evidence, and computer forensics is simply a rigorous procedure for collecting and analysing digital evidence.
But after sitting through many talks, the author found that many speakers, perhaps in order to market information-security products, distorted the real meaning of computer forensics, as if the purpose of the Personal Data Protection Act were to help the IT industry sell products.

So the author has recently announced plans to integrate the Personal Data Protection Act with computer forensics and, from a perspective free of commercial product pitches, bring correct knowledge to anyone who wants to listen.

In under two months there have already been six talks, heard by more than a thousand people across Taiwan. Using spare time, the author also finished 「圖解個人資料保護法」 (Illustrated Personal Data Protection Act) within three months; it only awaits the passage of the enforcement rules in mid-2012 before publication, and should then serve as a reference for anyone who needs it.

Recently a professor at a graduate institute asked the author to explain the concept of computer forensics to graduate students in a forensic accounting course. As the figure above shows, computer forensics is indeed a small part of forensic accounting: it is a way of finding problems and irregularities in a company's operations, and, as noted earlier, every field can involve digital evidence.

Forensic accounting also appeared for the first time as a topic in the 2011 Senior Civil Service Examination (Level Three). It looks as though topics revolving around computer forensics will only keep multiplying.




WhatsApp Xtract

I don't want to bore you by explaining what WhatsApp is. If you have this serious gap, you can fill it here. Forensically speaking, WhatsApp was a very cool app until last June. After that, someone decided to add the extension "crypt" to that excellent source of information, msgstore.db.

This database stores information about contacts as well as entire conversations.
However, if you simply open it with SQLite Browser, you can have some trouble extracting a single chat session with a particular contact, or reordering the messages. My latest Python script aims to overcome these problems and avoid dealing with complex SQL queries.
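For readers who do want to poke at the database by hand, here is a minimal sketch of the kind of query the script automates. This is not the author's script; the table and column names (messages, key_remote_jid, key_from_me, data, timestamp) and the millisecond timestamps are assumptions based on the Android schema of that era and may differ between WhatsApp versions.

import sqlite3
from datetime import datetime

def dump_chat(db_path, contact_jid):
    # Pull every message exchanged with one contact, oldest first
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT key_from_me, data, timestamp FROM messages "
        "WHERE key_remote_jid = ? ORDER BY timestamp",
        (contact_jid,))
    for from_me, text, ts in rows:
        direction = "sent" if from_me else "received"
        # timestamps are assumed to be milliseconds since the Unix epoch
        when = datetime.utcfromtimestamp(ts / 1000.0)
        print("%s  %-8s  %s" % (when, direction, text))
    conn.close()

# dump_chat("msgstore.db", "34611111111@s.whatsapp.net")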


Reposted from http://blog.digital-forensics.it/2011/12/whatsapp-xtract.html




What WhatsApp doesn't tell you...

It is the 'top' app in the mobile world: almost immediately after the 'give me your mobile number' request comes the next question, 'Do you have WhatsApp?'. Clearly this application is changing the concept of free SMS messaging.

Alberto warned about insecurity issues in how WhatsApp transmits data in plain text and what this means in shared environments.


Today we are going to talk about the inside: the way in which WhatsApp stores and manages its data.
Looking at the file structure of the application, we find two files called msgstore.db and wa.db (locations vary, of course, between Android and iPhone). These files are in SQLite format.

Once we import these files into a tool for browsing their content (e.g. SQLite Manager), here comes the first surprise: none of the information they contain is encrypted.
Contacts are stored in wa.db and EVERY sent message is in msgstore.db.


Wait a sec, did I say EVERY?
Absolutely, every sent and received message is there. And why is "EVERY" in uppercase? Simply because, although WhatsApp theoretically gives us the opportunity to delete conversations through its graphical interface, in reality they remain in the database ad infinitum.

And the issue is even more fun if we sent or received messages while GPS was enabled, because WhatsApp also stores the coordinates in msgstore.db.


In the case of Android there is even more stored that might be of interest to a forensic investigator - or maybe a jealous boyfriend/girlfriend. Apparently WhatsApp is configured by default with a very 'verbose' level of logging and stores, within the directory /files/Logs, files that look like this:

# pwd
/data/data/com.whatsapp/files/Logs
# ls
whatsapp-2011-06-06.1.log.gz whatsapp-2011-06-09.1.log.gz
whatsapp-2011-06-07.1.log.gz whatsapp.log
whatsapp-2011-06-08.1.log.gz
#

These files record every XMPP transaction made by the application at a very verbose (debug) level, including the timestamp of when each message was received or sent (among other things).

011-06-09 00:47:21.799 xmpp/reader/read/message 346XXXXXXX@s.whatsapp.net 1307XXXXXX-30 0 false false

These files are easily parseable to extract the list of mobile numbers that have had some kind of conversation with us. I created a small script that parses the file and pulls out this list of numbers:

import re
import sys

# WhatsApp log file to parse is passed as the first argument
logfile = sys.argv[1]
logdata = open(logfile, "r")
dump = logdata.readlines()

numerosin = []   # numbers we have received messages from
numerosout = []  # numbers we have sent messages to

for line in dump:

    # incoming messages: "xmpp/reader/read/message <number>@s.whatsapp.net ..."
    m = re.search('(?<=xmpp/reader/read/message )\d+', line)
    if m:
        if not numerosin.count(m.group(0)):
            numerosin.append(m.group(0))

    # outgoing message receipts: "xmpp/writer/write/message/receipt <number>@..."
    m = re.search('(?<=xmpp/writer/write/message/receipt )\d+', line)
    if m:
        if not numerosout.count(m.group(0)):
            numerosout.append(m.group(0))

print "Messages received from\n"
print "\n".join(numerosin)
print "\nMessages sent to\n"
print "\n".join(numerosout)

Executing the script will output the information as follows:

$ python whatsnumbers.py whatsapp-2011-06-08.1.log
Messages received from

34611111111
34622222222

Messages sent to

34611111111
34622222222
 
Reposted from http://www.securitybydefault.com/2011/06/what-whatsapp-doesnt-tell-you.html

Interesting Malware in Email Attempt - URL Scanner Links

Last weekend I spent some time with extended family helping confirm for them that their on-line email account got hacked and had been used to send some malware-linking spam emails to users in their contact list.
Yesterday our family email account was on the receiving end from someone who had -- possibly -- fallen victim to an email account hack, as our email address was among several others receiving the email together. I say possibly because none of us recognized the sender's email address and it wasn't in any of our address books. Possibly ours, along with the others' email addresses, had been harvested somehow and this was a fake spamming account. The "show-as" name was definitely non-standard and used some letters related to those in the subject line.
It was pretty evident to me this was probably a dangerous site to go to, but being curiously-minded, I couldn’t pass up the chance to do some detective work.
The email originated from a yahoo mail account.
The subject line was baited with "ACH Transfer Canceled…" and the display name in the email address contained the letters "NACHA."
ACH refers to the "Automated Clearing House," which handles financial transactions in the US and is overseen by NACHA. To most Americans, I'm betting these acronyms mean very little and they would be more taken with a sudden urge to grab some NACHOS instead. Maybe Europeans would be a little more anxious about emails purporting to come from ACH and NACHA. I digress.
The first thing I looked at was the message header. Lots of goodies there. We can follow the bounce from the Yahoo mail sender to our ISP's email servers, along with the times/dates of transmission.
Since this was a Yahoo mail account, it appears the header may actually contain the IP address of the location the mail account was logged into from. This is the first time I have seen this, so I need to do more research. The IP associated with this particular email is located in France.
The website IP Address Locator has lots of good tools for locating IP addresses as well as a feature that allows a copy/paste/analyze of email headers.
The content of the email was very thin: a single line with all the text run together. There is URL link markup there; however, it misses some of the characters. Hmm.
Toggling between the different modes of viewing email content in Thunderbird reveals odd results. If I look at it in original HTML mode I see a single line of text with a hyperlink in the middle.
If I view it in simple html most of the text is the same but a few characters are different.
If I view it in plain text, there is nothing showing.
Hovering over the displayed hyperlink shows a URL shortener link. Hmm. Set that aside for a moment.
So I go back and look at the full header view again and find this in the message body:
Content-Type: text/html; charset=ISO-8859-5
Content-Transfer-Encoding: base64
Ah! So I copy/paste the large text block that follows into this base64 online encoder/decoder and get a binary file to download!
(More regarding content encoding methods here Content-Transfer-Encoding - MSDN, here The Content-Transfer-Encoding Header Field via freesoft.org and here Decoding Internet Attachments - A Tutorial by Michael Santovec.)
Opening that binary file in Notepad++ reveals the html code with the same actual URL embedded.
My guess is they are using base64 encoding for the content to try to get around email scanners.
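The same decoding can be done offline rather than pasting the block into an online decoder. A minimal sketch, not the exact workflow used here: the file name body.b64 is a placeholder for wherever you saved the encoded block, and the charset follows the ISO-8859-5 value from the header above.

import base64
import re

def decode_and_find_urls(b64_blob):
    # Decode the raw base64 body (charset per the Content-Type header)
    html = base64.b64decode(b64_blob).decode("iso-8859-5", "replace")
    # Crude URL extraction from the decoded HTML - enough to spot the embedded link
    return re.findall(r"https?://[^\s'\"<>]+", html)

# with open("body.b64") as f:
#     print("\n".join(decode_and_find_urls(f.read())))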
OK, so let’s check out that URL.
Turns out it is using Google's own URL shortening service: Google URL Shortener. More info here: Google URL shortener - Web Search Help.
Turns out this is a pretty cool choice from both sides of the security fence. By appending ".info" to the end of a goo.gl shortened URL we can find out its stats: Goo.gl URL shortener (Google Groups).
This is good from an attacker's standpoint, as they can easily monitor their success rate on nibbles of this hook and any "hits" on the actual URL. Researchers can get information as well by monitoring the same stats and how fast/long the "click-through" happens.
h0j5wpnx.2up
Neat isn’t it?
Now that I’ve got the actual long URL that this points to, we can start tossing the URL at some on-line link analysis/scanner tools.
VirusTotal shows both TrendMicro and SCUMWARE.org report the long URL as a Malware/Malicious site.
Quttera reports it as serving up suspicious JavaScript content via the HTML page code.
Anubis: Analyzing Unknown Binaries provided a deeper review of the URL by capturing Windows system events in a virtual sandbox system. It accesses the Windows registry, mucks with some keys, creates a cookie, reads the autoexec.bat file, modifies some files, maps DLLs into memory, and appears to try to download more stuff. The report is available in HTML, XML, PDF, and TXT formats. They also offer a traffic.pcap file to download so you can examine the network traffic generated and perform any NFA (network forensic analysis) you want. This site/tool rocks from a depth-of-information standpoint.
urlQuery gives some more report feedback when the URL is sandboxed. Lots of JavaScript stuff. Another strong URL analysis reporting site.
Trying it a few more times while changing the browser type, Java version, and Flash version gets different results, and the URL-serving code reflects all kinds of different IPs each time. That long URL seems to be hosted on a dynamic IP host, allowing it to bounce around (serving up HTTP redirects) and deliver the malware code from all over the place depending on the platform, making it harder to track down the source.
urlQuery actually identified the network traffic as a Blackhole exploit kit v1.2 HTTP GET request. Another clue.
I tossed the pcap file I got from Anubis into NETRESEC NetworkMiner. Nothing very interesting but my Microsoft Security Essentials alerted when the HTML page was reassembled by NetworkMiner and quarantined the file. It identified the page code as being Exploit:JS/Blacole.AR. (MS’s way of saying “blackhole” I suppose…)
Here are a series of links regarding these kinds of email spam threats in general as well as Blackhole info in particular as it relates with email spam campaigns, if you are curious.
I doubt this is the last our email inbox will see of these things, but the whole process has been quite fun to follow.
I’ve decided to leave out links/images of the actual email and the header-code/URL (short/long) but have passed it along to a number of security-spam websites in case it is of use.
A long time ago I had a list of URL-testing sites to feed a URL into to see if it was safe or not. Most seem to have gone away; however, the following forums had a number of new ones worth bookmarking. Hat tip to "PROROOTECT" for the legwork!
Here is a combined and cleaned-up list based on PROROOTECT's collective work in both places, plus one or two I'm tossing in, minus a few from those lists that seem dead or redirect incorrectly. PROROOTECT makes a great point that the effectiveness of these varies, so a "bad" URL in one may come back as "clean" in another. So it's best to run your URL through multiple sources.
Note, these are URL/web-page scanners. They are a bit different from the online file scanners/sandboxes used to analyze malware samples, though a few come pretty darn close with the depth of their reports/analysis.
Not necessarily listed in order of usefulness.
PROROOTECT's suggestion to use an online URL screenshotting service to safely capture what the URL displays is some good outside-the-box thinking. Kind of a "look-before-you-leap" check if all the above items pass OK.
Fun trip if it wasn’t so serious…
--Claus V.
Update: I meant to add this to the original post but got sidetracked. A recent Digital Forensics Case Leads post mentions a super-fantastic investigation/forensic report involving anonymous emails. This is must-read material, not just for the investigative methodology but also for the way the report was composed and presented. Very clearly done! I'm keeping a saved copy of the report for future reference, both technically and as a report template. From the post via the link above:
University of Illinois recently released a detailed investigation report (PDF) regarding anonymous emails allegedly sent by its Chief of Staff to the University's Senates Conference. The report is an interesting read, and also serves as a potentially useful model for those looking for report samples and templates.

Reposted from http://grandstreamdreams.blogspot.com/2012/01/interesting-malware-in-email-attempt.html

Ripping Volume Shadow Copies – Introduction

Windows XP is the operating system I mostly encounter during my digital forensic work. Over the past year I've been seeing more and more systems running Windows 7. 2011 brought with it my first few cases where the corporate systems I examined (at my day job) were all running Windows 7. There was an even more drastic change for the home users I assisted with cleaning malware infections, because toward the end of the year all of those cases involved Windows 7 systems. I foresee Windows XP slowly becoming a relic as the corporate environments I face start upgrading the clients on their networks to Windows 7. One artifact that will be encountered more frequently in Windows 7 is Volume Shadow Copies (VSCs). VSCs can be a potential gold mine, but for them to be useful one must know how to access and parse the data inside them. The Ripping Volume Shadow Copies series discusses another approach to examining VSCs and the data they contain.

What Are Volume Shadow Copies


VSCs are not new to Windows 7 and have actually been around since Windows Server 2003. Others in the DFIR community have published a wealth of information on what VSCs are, their forensic significance, and approaches to examining them. I'm only providing a quick explanation, since Troy Larson's presentation slides and Lee Whitfield's Into the Shadows blog post provide excellent overviews of what VSCs are. Basically, the Volume Shadow Copy Service (VSS) can back up data on a Windows system. VSS monitors a volume for any changes to the data stored on it and creates backups containing only those changes. These backups are referred to as shadow copies. According to Microsoft, the following activities will create shadow copies on Windows 7 and Vista systems:

        -  Manually (Vista & 7)
        -  Every 24 Hours (Vista)
        -  Every 7 Days (7)
        -  Before a Windows Update (Vista & 7)
        -  Unsigned Driver Installation (Vista & 7)
        -  A program that calls the Snapshot API (Vista & 7)

Importance of VSCs


The data inside VSCs may have a significant impact on an examination for a couple of reasons. The obvious benefit is the ability to recover files that may have been deleted or encrypted on the system. This rang true for me on the few cases involving corporate systems; if it wasn't for VSCs then I wouldn't have been able to recover the data of interest. The second, and possibly even more significant, benefit is the ability to see how systems and/or files evolved over time. I briefly touched on this in the post Ripping Volume Shadow Copies Sneak Peek. I mentioned how parsing the configuration information helped me know what file types to search for based on the installed software. Another example was how the user account information helped me verify a user account existed on the system and narrow down the timeframe when it was deleted. A system's configuration information is just the beginning; documents, user activity, and programs launched are all great candidates to see how they changed over time.


To illustrate I’ll use a document as an example. When a document is located on a system without VSCs - for the most part - the only data that can be viewed in the document is what is currently there. Previous data inside the document might be able to be recovered from copies of the document or temporary files but won’t completely show how the document changed over time. To see how the document evolved would require trying to recover it at different points in time from system backups (if they were available). Now take that same document located on a system with VSCs. The document can be recovered from every VSC and each one can be examined to see its data. The data will only be what was inside the document when each VSC was created but it could cover a time period of weeks to months. Examining each document from the VSCs will shed light on how the document evolved. Another possibility is the potential to recover data that was in the document at some point in the past but isn't in the document that was located on the system. If system backups were available then they could provide additional information since more copies of the document could be obtained at other points in time.


Accessing VSCs


The Ripping Volume Shadow Copies approach works against mounted volumes. This means a forensic image or hard drive has to be mounted on a Windows system (Vista or 7) in order for the VSCs in the target volume to be ripped. There are different ways to see a hard drive's or image's VSCs; I've highlighted some options below:

        -  Mount the hard drive by installing it inside a workstation (option will alter data on the hard drive)
        -  Mount the hard drive by using an external hard drive enclosure (option will alter data on the hard drive)
        -  Mount the hard drive by using a hardware writeblocker
        -  Mount the forensic image using Harlan Carvey’s method documented here, here, and the slide deck referenced here
        -  Mount the forensic image using Guidance Software’s Encase with the PDE module (option is well documented in the QCCIS white paper Reliably recovering evidential data from Volume Shadow Copies)

Regardless of the option used to mount the hard drive or image, the Windows vssadmin command or the Shadow Explorer program can show which VSCs, if any, are available for a given mounted volume. The pictures below show the Shadow Explorer program and the vssadmin command displaying some VSCs for the mounted volume with drive letter C.

Shadow Explorer Displaying C Volume VSCs

VSSAdmin Displaying C Volume VSCs

Picking VSCs to examine is dependent on the examination goals and what data is needed to accomplish those goals. However, time will be a major consideration. Does the examination need to review an event, document, or user activity for specific times or for all available times on a computer? Answering that question will help determine whether certain VSCs covering specific times are picked or every available VSC should be examined. Once the VSCs are selected, they can be examined to extract the information of interest.


Another Approach to Examine VSCs


Before discussing another approach to examining VSCs it's appropriate to reflect on the approaches practitioners are currently using. The first approach is to forensically image each VSC and then examine the data inside each image. Troy's slide deck referenced earlier has a slide showing how to image a VSC, and Richard Drinkwater's Volume Shadow Copy Forensics post from a few years ago shows imaging VSCs as well. The second popular approach doesn't use imaging; instead, it copies data from each VSC and then examines that data. The QCCIS white paper referenced earlier outlines this approach using the robocopy program, as does Richard Drinkwater in his posts here and here. Both approaches are feasible for examining VSCs, but another approach is to examine the data directly inside VSCs, bypassing the need for imaging and copying. The Ripping VSCs approach examines data directly inside VSCs, and there are two different methods to implement it: the Practitioner Method and the Developer Method.


Ripping VSCs: Practitioner Method


The Practitioner Method uses one's existing tools to parse data inside VSCs. This means someone doesn't have to learn a new tool or learn a programming language to write their own tools. All that's required is that the tool be command line and that the practitioner be willing to execute the tool multiple times against the same data. The picture below shows how the Practitioner Method works.

Practitioner Method Process

Troy Larson demonstrated how a symbolic link can be used to provide access to VSCs. The mklink command can create a symbolic link to a VSC, which then provides access to the data stored in the VSC. The Practitioner Method uses the access provided by the symbolic link to execute one's tools directly against the data. The picture above illustrates a tool executing against the data inside Volume Shadow Copy 19 by traversing through a symbolic link. One could quickly determine the differences between VSCs, parse registry keys in VSCs, examine the same document at different points in time, or track a user's activity to see what files were accessed. Examining VSCs can become tedious when one has to run the same command against multiple symbolic links to VSCs; this is especially true when dealing with 10, 20, or 30 VSCs. A more efficient and faster way is to use batch scripting to automate the process. Only a basic understanding of batch scripting (knowing how a For loop works) is needed to create powerful tools to examine VSCs. In future posts I'll cover how simple batch scripts can be leveraged to rip data from any VSC within seconds (a rough sketch of the same idea in Python follows below).
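The sketch below mirrors that idea in Python rather than batch; it is not the author's script, which is covered in the later posts. It lists the shadow copy device paths with vssadmin, exposes each one through a symbolic link created with mklink, and runs an existing command-line tool against each link. The tool name some_tool.exe and the C:\vscN link paths are placeholders, and the commands assume an elevated prompt on the Vista/7 analysis box.

import re
import subprocess

def list_shadow_devices(volume="C:"):
    # vssadmin output contains lines such as:
    #   Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy19
    out = subprocess.check_output(
        ["vssadmin", "list", "shadows", "/for=" + volume]).decode(errors="replace")
    return re.findall(r"\\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy\d+", out)

for i, device in enumerate(list_shadow_devices("C:")):
    link = r"C:\vsc%d" % i
    # mklink is a cmd.exe built-in, so shell=True is needed; note the trailing
    # backslash on the target, which mklink /d requires for VSC device paths
    subprocess.call("mklink /d %s %s\\" % (link, device), shell=True)
    # run any existing command-line parser against the linked copy (placeholder tool)
    subprocess.call(["some_tool.exe", link])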


Ripping VSCs: Developer Method


I've been using the Practitioner Method for some time now against VSCs on live systems and forensic images. The method has enabled me to see data in different ways, which was vital for some of my work involving Windows 7 systems. Recently I figured out a more efficient way to examine data inside VSCs. The Developer Method can examine data inside VSCs directly, bypassing the need to go through a symbolic link. The picture below shows how the Developer Method works.

Developer Method Process

The Developer Method programmatically accesses the data directly inside VSCs. The majority of existing tools cannot do this natively, so one must modify existing tools or develop one's own. I used the Perl programming language to demonstrate that the Developer Method for ripping VSCs is possible. I created simple Perl scripts to read files inside a VSC, and I modified Harlan's lslnk.pl to parse Windows shortcut files inside a VSC. Unlike the Practitioner Method, at the time of this post I have not extensively tested the Developer Method. I'm discussing the Developer Method not only for completeness when explaining the Ripping VSCs approach, but also in the hope that releasing my research early can help spur the development of DFIR tools for examining VSCs.


What’s Up Next?


Volume Shadow Copies have been a gold mine for me on the couple of corporate cases where they were available. The VSCs enabled me to successfully process those cases, and that experience is what pushed me toward a different approach to examining VSCs: parsing the data while it is still stored inside the VSCs. I'm not the only DFIR practitioner looking at examining VSCs in this manner. Stacey Edwards shared in her post Volume Shadow Copies and LogParser how she runs the program logparser against VSCs by traversing through a symbolic link. Rob Lee shared his work on Shadow Timelines, where he creates timelines and lists deleted files in VSCs by executing the Sleuthkit directly against VSCs. Accessing VSC data directly can reduce examination time while enabling a DFIR practitioner to see data temporally. Ripping Volume Shadow Copies is a six-part series, and the remaining five posts will explain the Practitioner and Developer methods in depth.

        Part 1: Ripping Volume Shadow Copies - Introduction
        Part 2: Ripping VSCs - Practitioner Method
        Part 3: Ripping VSCs - Practitioner Examples
        Part 4: Ripping VSCs - Developer Method
        Part 5: Ripping VSCs - Developer Example
        Part 6: Examining VSCs with GUI Tools



Reposted from http://journeyintoir.blogspot.com/2012/01/ripping-volume-shadow-copies.html


Internet Explorer RecoveryStore (Travelog) Parsing Tools

RecoverRS

Based on the research into Internet Explorer's Automatic Crash Recovery files, two command-line applications were created: RipRS and ParseRS, collectively known as RecoverRS. Detailed information regarding the operation of these two applications is available in Appendix C, the RecoverRS manual.
RipRS is designed to extract ACR files from a raw disk image using known decimal offsets. A list of known offsets can be obtained by using the search string discussed in the above section titled 'Finding Compound Files in Unallocated Space' with programs such as EnCase or FTK. Using these known offsets, RipRS applies the methodology discussed in the above section titled 'Carving Compound Files in Unallocated Space' to determine the compound file's size. RipRS then searches the compound file for the string '0B00252A-8D48-4D0B-7B79887F2B96', a GUID that is unique to ACR files. If RipRS determines that the compound file is in fact an ACR file, it searches the ACR file for strings unique to either recovery store files or tab data files to determine which type the file is. Once RipRS has determined the ACR file type, the file is written to the output directory specified by the user using the naming convention RecoveryStore.{offset}.dat or {offset}.dat for recovery store files and tab data files respectively.
ParseRS is designed to extract browsing information from ACR files; either those found on the system or those carved from unallocated space by RipRS.  As mentioned previously, if ACR files are carved from unallocated space, information linking the tab data files with their respective recovery store files and some date/time information will be lost.
 

Reposted from http://www.jtmoran.com/tools/default.html

Parsing the Internet Explorer RecoveryStore (Travelog)

Internet Explorer RecoveryStore (aka Travelog) as evidence of Internet Browsing activity

This artifact has attracted my attention of late, as I have seen some very useful information here in a few recent cases. Here you find not only browsed URLs but webpage details like the title (sometimes content) and timestamps. Even data from encrypted pages (HTTPS) is stored here in plaintext, which IE by default does not save in the internet cache. I have even seen email and Facebook passwords here on occasion!

What is RecoveryStore and why is it present?

IE 8 and 9 have a tab recovery feature by virtue of which you can restore all your tabbed browsing sessions if IE crashes, or when you close IE and choose to save tabs on exit (so that they may be reopened automatically when IE is started next time).



With IE8, Microsoft also introduced the concept of a 'Travelog'. This is a mechanism to track URLs (and associated parameters) that are fetched from a page when AJAX is used. AJAX is a technology which enables dynamic refreshes of small portions of a page without reloading the whole page. It was popularized by Gmail, and most webpages use it today. With AJAX, your main page URL does not change, but the page contents change as you click around the page (accessing data from other URLs); this creates a problem because you cannot use the browser back button to go back one click. To solve this problem (with the back and forward buttons), the Travelog is used to track AJAX URLs. Read more about it on MSDN here.


So where is this cached information?

The RecoveryStore can be found under /Application Data on an XP machine and under /AppData/Local on a Vista or Windows 7 machine, in the subfolder Microsoft/Internet Explorer/Recovery.

Location of RecoveryStore files on a Windows 7 Machine

Two folders are present by default, Active and LastActive. Sometimes a couple of other folders are seen, High and Low. All folders contain similar data: a few files named {GUID}.dat and a single RecoveryStore.{GUID}.dat file per folder. The GUIDs are in the standard format {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}.

Analysis of RecoveryStore files Part I

All files are in the Microsoft OLE structured storage container format. When opened with a suitable viewer (much freeware is available for this; if you use EnCase, use 'View File Structure' to mount), you find many streams (files) within each.

There is a single RecoveryStore.{GUID}.dat file which represents the recovery store, preserving tab order and some other information. It references the other {GUID}.dat files.

RecoveryStore.{GUID}.dat

This file contains 3 or more streams. If more than one session (instance of IE) is running, then more streams will be present.

Stream Name                    Description
|KjjaqfajN2c0uzgv1l4qy5nfWe    Contains some GUIDs
FrameList                      List of DWORDs, function unknown
TSxx                           Contains GUIDs of the tabs in session x (i.e., TS1 holds the tabs of session 1)

RecoveryStore.{GUID}.dat file viewed in an OLE object viewer
The FrameList stream is shown above

The GUID in the filename
The GUID is actually a UUID (version 1), which comprises a FILETIME-like timestamp and the machine's MAC address. The details of this scheme can be found in RFC 4122 (http://www.ietf.org/rfc/rfc4122.txt).

The timestamp is the first 60 bits of the UUID and represents the number of 100-nanosecond intervals since 15 October 1582. Note that the only major difference from the Microsoft FILETIME values used everywhere else in Windows is the starting date, which is 01 January 1601 for FILETIME.

This time is the tab/recovery store creation time and can be used to cross-check the timestamp on disk for forensic validation. These UUIDs are also found in the '|KjjaqfajN2c0uzgv1l4qy5nfWe' stream in RecoveryStore.{GUID}.dat.

Example: {FD1F46CF-E6AB-11E0-9FAC-001CC0CD46AA}.dat
From this UUID, we can extract the timestamp as 01E0E6ABFD1F46CF which decodes to 09/24/2011 12:51:58 UTC.
The last 6 bytes are the MAC address of the machine (00 1C C0 CD 46 AA); it can be from any of the network interfaces on the machine.

Timestamp Easy Conversion Process 
(http://computerforensics.parsonage.co.uk/downloads/TheMeaningofLIFE.pdf)

An easy way of converting the timestamp without messing too much with the math behind it is to subtract the time period between 15 October 1582 and 1 January 1601, and then use a FILETIME decoder program (like DCODE) to do the rest. For the above example, we subtract 146BF33E42C000 (the excess time period) from the original value to get 1CC7AB8BEDC86CF, which decodes to 09/24/2011 12:51:58 UTC.
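Since the filename GUID is a standard version 1 UUID, the conversion can also be scripted. Here is a short sketch using Python's uuid module, checked against the example above:

import uuid
from datetime import datetime, timedelta

def uuid1_timestamp(guid_str):
    """Decode the creation time embedded in a version 1 UUID."""
    u = uuid.UUID(guid_str)
    # u.time is the 60-bit count of 100-nanosecond intervals
    # since the UUID epoch of 15 October 1582
    return datetime(1582, 10, 15) + timedelta(microseconds=u.time // 10)

guid = "FD1F46CF-E6AB-11E0-9FAC-001CC0CD46AA"   # example from above
print(uuid1_timestamp(guid))                    # -> 2011-09-24 12:51:58 (UTC)
print("MAC: %012x" % uuid.UUID(guid).node)      # -> 001cc0cd46aa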

{GUID}.dat files

Each file represents a tab in the browser. Inside each file are 3 or more streams.

Stream Name                    Description
|KjjaqfajN2c0uzgv1l4qy5nfWe    Contains some GUIDs and the last URL of the tab
Travelog                       List of DWORDs representing each Travelog entry
TLxx                           Travelog stream (TL0, TL1, ...)

'|KjjaqfajN2c0uzgv1l4qy5nfWe' stream inside a {GUID}.dat file shown above

Travelog Stream

This stream has a complex binary format which stores many items. The base URL, referrer URL and page title are always present. Page content, some timestamps and AJAX parameters are optionally present.

I have been studying the format of the Travelog and will shortly publish it as Part II of this blog entry. 


Update: An EnCase script is now available for download here to parse out Travelog info.
 
Reposted from http://www.swiftforensics.com/2011/09/internet-explorer-recoverystore-aka.html