Online Banking Hacked and Expecting the Bank to Pay? A Digital Forensic Investigation Comes First

Author: 張維君 - 02/14/2011

Last week, the Banking Bureau of Taiwan's Financial Supervisory Commission (FSC) released a draft amendment to the standardized contract for online banking. The biggest change from the past: if a customer's online banking account is misused and deposits are fraudulently transferred out, the bank will no longer necessarily absorb the whole loss. Under two conditions the bank may reduce or be exempted from liability for damages: first, the bank can prove the customer acted intentionally or negligently; second, the customer notified the bank of the abnormal transaction only after a set period had passed since the bank mailed the transaction statement.


In the revised draft of the "Mandatory and Prohibited Provisions of Standardized Contracts for Personal Internet Banking Services," the key point of Article 9 is that if a customer's online banking user ID, password, certificate, or private key is misused or stolen by a third party, or subject to any other unauthorized use, the bank bears responsibility except in the two situations described above.


A player cannot also be the referee: disputes go to third-party forensics
For years, whenever a dispute arose over abnormal transactions in an online banking account, banks, mindful of their reputation and customer service, would usually just absorb the loss as long as the amount was small. Over time, however, this added up to a considerable write-off for the banks. The current draft amendment is meant to remind the public to pay attention to their own online security. According to the FSC Banking Bureau's Credit Cooperative Division, in the future, once a customer reports an abnormal transaction, the bank will first run its own checks and then commission a third party to carry out a forensic investigation covering not only the bank's systems but also the computer the customer used. Whether a Trojan planted on the customer's computer constitutes negligence will be judged case by case.


FSC officials point out that this clause already existed in earlier versions of the standardized contract; the current revision adds the requirement that liability be determined through a digital forensic investigation, which better balances the interests of both the bank and the customer, and the bank must bear all forensic investigation costs. In practice, however, the officials expect banks will still apply lenient compensation standards for the sake of their reputation.


Customers who have never been in the habit of reconciling their online banking statements should take note: in the future, if an abnormal transaction is reported only after a set period following the mailing of the statement (the draft specifies no less than 45 days), the chance of being denied compensation rises. Online banking users should therefore enable the statement-mailing service option as soon as possible and watch their accounts for abnormal activity themselves.


China's digital forensics industry is developing quickly
Ye Hong (葉紅), deputy director of the Information Security Research and Service Center at China's State Information Center, who recently visited Taiwan, noted that China has already built a judicial authentication system for digital evidence: any digital evidence to be presented in court must first be sent to a designated digital evidence laboratory for examination, and judges accept only reports issued by that laboratory. The laboratory is an independent third party, separate from the judicial authorities, and practitioners must obtain the relevant certifications before they may do digital forensic work.


Regulation drives demand. On top of the Personal Data Protection Act, the amended standardized contract for online banking now also creates demand for digital forensic services, so the government should establish standards and regulations for the digital forensics industry as soon as possible.


Reposted from 資安人 (Information Security)

Internet Evidence Finder - IEF

What it does

Simply put, IEF is a software application that can search a hard drive or files for Internet related artifacts. It is a data recovery tool that is geared towards digital forensics examiners but is designed to be straightforward and simple to use.


IEF v4 searches the selected drive, folder (and sub-folders, optionally), or file (memory dumps, pagefile.sys, hiberfil.sys, etc) for Internet artifacts. A case folder is created containing the recovered artifacts and the results are viewed through the IEF v4 Report Viewer where reports can be created and data exported to various formats.


IEF has gone through a great many revisions and transformations in its journey to Version 4. There is also now a Portable Edition of IEF v4.


It can currently find:
  • Facebook® Chat
  • Facebook® Web Page Fragments
  • Facebook® Email “Snippets”
  • Facebook® Emails
  • Facebook® Status Updates / Wall Posts
  • Twitter® Status Updates
  • GoogleTalk® Chat
  • Gmail® Email Fragments
  • Gmail® Email “Snippets”
  • Yahoo!® Messenger Chat
  • Yahoo!® Webmail Chat
  • Yahoo!® Messenger – Non-Encrypted Chat
  • Yahoo!® Messenger – Group Chat
  • Yahoo!® Messenger Diagnostic Logs
  • Yahoo!® Webmail
  • Internet Explorer 8® (IE8) InPrivate/Recovery URLs
  • MSN®/Windows Live Messenger® Chat
  • Hotmail® Webmail
  • Messenger Plus!® Chat Logs
  • Firefox® places.sqlite History Artifacts
  • Firefox® formhistory.sqlite Artifacts
  • Firefox® sessionstore.js Artifacts
  • AOL® Instant Messenger Chat Logs
  • MySpace® Live Chat
  • Bebo® Live Chat
  • Limewire.props Files
  • Limewire® v5.2.8 to v5.5.16 Search Keywords
  • Limewire®/Frostwire® v4.x.x Search Keywords
  • Frostwire.props Files
  • mIRC® Chat Logs

The details for each artifact are listed below:

Facebook® Chat Messages

Description: Messages sent and received using the Facebook® live chat feature. Information found with the message can include the Facebook® profile ID used to send/receive the message, the from/to names and IDs, and the date/time (in UTC) that the message was sent. However, there are a few different formats of Facebook chat and not all formats include all this data. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, temporary Internet files, the $Logfile (a special NTFS file used for recoverability purposes), file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High

Facebook® Page Fragments

Description: Facebook® related web pages, including but not limited to the Inbox page, emails, photo galleries, groups, and so on. Most recovered items will be fragments and not the complete page, but attempts are made to recover the entire page and filter out false positives. A header is added to the fragment to aid in viewing the page in its original format. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, temporary Internet files, and unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

MSN®/Windows Live Messenger Chat Messages

Description: Chat messages sent/received using Windows Live Messenger®. Located messages are exported into text files for MSN protocol fragments or into a report file for regular chat log messages. MSN protocol fragments usually only include a line of chat and sometimes the sender’s email address, immediately prior to the message.
Prior versions of IEF attempted to recreate the original log files but the new method of searching for individual messages enables much more chat to be recovered.
(Note: The Windows Live Messenger® search option is backwards compatible with MSN Messenger®, and these two program names are used interchangeably in IEF.)
Possible locations: MSN/WLM chat log files, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High

Yahoo!® Chat Messages

Description: Chat messages sent and received using Yahoo!® Messenger. These chat messages are logged in an encrypted format that requires the local username to decrypt the message. The username is usually the first half of the email address used to log-in (e.g. if the log-in email address is jasonho@yahoo.com, then the username is jasonho). IEF v4 can decrypt messages that have not been deleted without requiring a username, however. When searching unallocated space or memory dumps, etc., a number of false positives are unavoidable due to the format of these chat logs and because there is no way to determine if a chat log was decrypted successfully or not.
IEF uses a number of validations to filter out these false positive hits and now with v4 you can specify an acceptable time frame and the filtering strictness to further filter out false hits.
Possible locations: Yahoo! Messenger chat logs, live memory dumps, the pagefile.sys/hiberfil.sys files, the $Logfile (a special NTFS file used for recoverability purposes), file slack space, and unallocated clusters
Estimated Likelihood of Recovery: Medium to High
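
The "encrypted format" referred to above is commonly documented as a simple rolling XOR of each archived message byte against the local Yahoo! username; the sketch below illustrates only that decoding step. The full record layout of the archive files is not shown here, and the function is an illustration rather than IEF's implementation.

# Sketch: decode one Yahoo! Messenger archive message payload, assuming the
# commonly documented rolling XOR against the local username.
def decode_yahoo_message(payload: bytes, username: str) -> str:
    key = username.encode("ascii")
    plain = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return plain.decode("utf-8", errors="replace")

# Hypothetical usage: decode_yahoo_message(encrypted_bytes, "jasonho")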

GoogleTalk® Chat Messages

Description: Messages sent or received using GoogleTalk® live chat within Gmail® webmail. Information found with the message can include the message ID, the Sender/Recipient email addresses, and the sender/recipient’s ID. Dates and times are not available to recover at this time. This search option may also recover chat left behind from other chat programs that utilize the ‘Jabber’ chat protocol (the sender/recipient ID will be your clue, containing an abbreviated name of the client used by that person). Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: Low

Yahoo!® Webmail Chat Messages

Description: Messages sent or received using the live webmail chat found in Yahoo!® Webmail. Information found with the message can include the Status number, the version number and vendor ID, the session ID, and the Sender/Recipient usernames. Dates and times are not available in this type of artifact to recover at this time.
Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and possibly on other areas of a hard drive
Estimated Likelihood of Recovery: Low

Gmail® Email

Description: This search will recover Gmail® email fragments left behind in live memory. Information found will vary and this search does not parse any information out. IEF will do its best to clean up the located fragment and convert encodings into a more readable format. Some fragments will be of the folder view with the sender name/address, subject, and first segment of the body of the email.
Please see the “Gmail Parsed Email Snippets” search for a parsed version of this search.
Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and possibly unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

Limewire® v5.2.8 – v5.5.16 Search History

Description: Search keywords left behind in live memory by Limewire® (tested with Limewire® v5.2.8 – v5.5.16). Search keywords/terms that are recovered have an associated number indicating how many search results were returned for that search term at the time the keyword was left in memory. The recovered search terms are search keywords that were entered by the local user. Other search keywords that were passed through the client (“Incoming Searches”) from other clients on the P2P network are not recovered. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and possibly unallocated clusters
Estimated Likelihood of Recovery: Low

Limewire.props files

Description: This search finds fragments of Limewire.props files. These files contain configuration data for the Limewire® peer to peer file sharing client and can include geo-locations, recent downloads, and many other useful items. Possible locations: Limewire configuration folders, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Medium to High

IE8 InPrivate/Recovery URLs

Description: These artifacts are URLs visited during “InPrivate” browsing in IE8 and URLs that are saved in Internet Explorer recovery files (used to recover tabs in the event of a crash). At this time, there is no known method of distinguishing between these two types of URL artifacts, but if the location of the artifact is in an IE8 recovery file, it is not from InPrivate browsing. Also found with the URLs is a page title or description, but this is not always present. Possible locations: IE8 recovery files, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High

Yahoo!® Messenger Group Chat

Description: Messages sent or received in Yahoo!® Messenger Group chat rooms. Information found within these fragments can include the date/time, the username that sent the message, and the message itself. The name of the Yahoo! Messenger group that the message is sent within is not present in these artifacts for recovery. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

Yahoo!® Webmail email

Description: Email messages, email compose pages, and folder views from Yahoo!® webmail fragments. Multiple types of Yahoo!® webmail interfaces are supported, including ‘Classic view’ and the newer Yahoo!® Webmail view. These recovered artifacts may be complete in some cases but much of the time they will be partial fragments. Possible locations: Temporary Internet files, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Medium to High

Hotmail® Webmail email

Description: Email messages, contact listings, and folder views from Hotmail® webmail fragments. These recovered artifacts may be complete in some cases but much of the time they will be partial fragments. Possible locations: Temporary Internet files, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Medium to High

AOL® Instant Messenger chat logs

Description: AOL® Instant Messenger (AIM) chat logs. The entire log is searched for, not individual messages. Possible locations: AIM chat log files, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Medium

Messenger Plus!® chat logs

Description: Messenger Plus!® is an add-on for Windows Live Messenger®/MSN Messenger® that adds a number of features to the chat program. The logs it creates are different from the traditional MSN/WLM chat logs and it also provides an option of encrypting the chat logs. Encrypted chat logs cannot be recovered at this time, but some of the encrypted chat can be recovered in the MSN/WLM search as MSN protocol fragments. Possible locations: Messenger Plus! chat log files, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Medium

MySpace® chat

Description: Messages sent or received in MySpace® live chat. Information found within these fragments can include the status of the message, the date/time, the sender ID, target ID, and the message itself. Some user info is also recoverable, such as the real name/username associated to a MySpace ID, image URL, and other information. This information is saved to a ‘User Info’ report. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

Bebo® chat

Description: Messages sent or received in Bebo® live chat. Information found within these fragments can include the status of the message, the date/time, the sender username, target username, and the message itself. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

Non-encrypted Yahoo!® Messenger chat

Description: Non-encrypted chat messages left behind by Yahoo!® Messenger. These messages are artifacts from the actual Yahoo!® Messenger chat window. No username(s) are required to recover these messages. Messages of this type include the sending user name, the date/time (local time, not UTC), and the message itself. The recipient is not found in these fragments but can usually be ascertained by viewing the chat conversation. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

Facebook® Email “Snippets”

Description: This search will recover Facebook® email “snippets” (previews of a full email message). This artifact is left behind when a user is viewing their Inbox or Sent Messages folder in their Facebook® account. It can include the Subject line, Original Author user ID, Recent Authors user IDs (the participants of the email conversation), Time Last Updated (the last time a message was posted in the thread), thread ID (ID# of the message in the user’s mailbox), and the “snippet” itself. Possible locations: Temporary Internet files, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and possibly unallocated clusters
Estimated Likelihood of Recovery: Medium

Gmail® Email “Snippets”

Description: This search will recover Gmail® email “snippets” (previews of a full email message). This artifact is left behind when a user is viewing the Inbox folder in their Gmail® webmail account. It can contain the email addresses included in the message, the subject, file names of attachments, the date/time (in local time), read/unread status, and the “snippet” itself. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and possibly unallocated clusters
Estimated Likelihood of Recovery: Medium

Frostwire.props Files

Description: This search finds fragments of Frostwire.props files. These files contain configuration data for the Frostwire® peer to peer file sharing client and can include geo-locations, recent downloads, and many other useful items. Possible locations: Frostwire configuration folders, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Medium to High

Twitter® Status Updates

Description: This search will recover Twitter® status updates. This artifact is left behind in several formats when a user is updating their status or viewing another person’s status update. It can include the Name of the user, the screen name, created time, status ID#, where the status was updated from, geo-tags, if the update is a “retweet”, the profile image URL of the user, and the text of the status update. Possible locations: Temporary Internet files, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and possibly unallocated clusters
Estimated Likelihood of Recovery: Medium

Limewire®/Frostwire® Search Keywords

Description: Search keywords left behind in live memory by version 4 of Limewire® and Frostwire® (tested with most Limewire/Frostwire v4 clients). Search keywords/terms that are recovered have an associated number indicating how many search results were returned for that search term at the time the keyword was left in memory. The recovered search terms are search keywords that were entered by the local user. Other search keywords that were passed through the client (“Incoming Searches”) from other clients on the P2P network are not recovered. Possible locations: Live memory dumps, the pagefile.sys/hiberfil.sys files, and possibly unallocated clusters
Estimated Likelihood of Recovery: Low

Firefox® places.sqlite History Artifacts

Description: This is a first-of-its-kind search that recovers browsing history URLs from the places.sqlite files Firefox® uses to store browsing history and other information. The entire SQLite file is not required, only the individual entries. Due to the format and nature of this artifact, some parsing must be done to separate the URL and web page title items. Sometimes this parsing will be incorrect; in that case, see the unparsed column for the original data. Recovered items include the parsed URL, parsed web page title, visit count, whether or not the URL was typed by the user, last visited time (in UTC), and the unparsed URL/web page title. Note 1: Parsing live (undeleted) places.sqlite files is better done with other Firefox history parsing software as there is more information to be found in these files and the URL/title can be parsed more accurately, but this search is very useful for live memory dumps and deleted records, records in the pagefile.sys/hiberfil.sys files, etc.
Note 2: if any of the individual items for each recovered record were not recovered or contain garbage information, that record should be verified as it may not be reliable information and could be a false positive hit.
Note 3: This search recovers artifacts from Firefox v3.5 to v4.0b8. It does not recover artifacts from Firefox v3.0.x as those older versions use a different database format. Firefox v1-2 do not use the places.sqlite file and therefore are not supported in this search.
Possible locations: Firefox profile folders, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High
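
For comparison, a live (undeleted) places.sqlite can be examined directly with SQLite, as Note 1 suggests. A minimal Python sketch follows; it is an illustration rather than IEF's method, and it assumes a copied-out places.sqlite with last_visit_date stored as microseconds since the Unix epoch (UTC).

# Sketch: dump URL, title, visit count, typed flag and last visit time
# from the moz_places table of a copied-out places.sqlite.
import sqlite3, datetime

con = sqlite3.connect("places.sqlite")
for url, title, visits, typed, last in con.execute(
        "SELECT url, title, visit_count, typed, last_visit_date FROM moz_places"):
    when = datetime.datetime.utcfromtimestamp(last / 1_000_000) if last else None
    print(when, visits, typed, title, url)
con.close()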

Firefox® formhistory.sqlite Artifacts

Description: This is a first-of-its-kind search that recovers query history from the formhistory.sqlite files Firefox® uses to store web page form entry history (e.g. a search entered into Google or other search engine). The entire SQLite file is not required, only the individual entries. Recovered items include the fieldname (the name of the textbox where the query was made), the value (the text that was entered into the textbox on the web page, e.g. the search term entered), the number of times used, the date/time (UTC) the query was first made, and the date/time (UTC) it was last made.
Note 1: At this time, IEF only recovers the fieldnames “q” and “query” (commonly used in search engines such as Google) and “searchbar-history” (searches made from the Google toolbar). Other fieldnames may be added in the future.
Note 2: if any of the individual items for each recovered record were not recovered or contain garbage information, that record should be verified as it may not be reliable information and could be a false positive hit.
Note 3: This search recovers artifacts from Firefox v3.0.x to v4.0b8. Firefox v1-2 do not use the formhistory.sqlite file and therefore are not supported in this search.
Possible locations: Firefox profile folders, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High

Firefox® sessionstore.js Artifacts

Description: This search will recover URLs from the sessionstore.js file Firefox® uses to store URLs to facilitate recovering from a web browser crash. The entire sessionstore.js file is not required, only the individual entries. Recovered items can include the URL, the web page title, and the referring URL. Some items will have the web page title while some will only have the referring URL. Possible locations: Firefox profile folders, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High

Facebook® Status Updates / Wall Posts

Description: This search will recover Facebook® Status Updates and Wall Posts. These can be from the local user or from other users on Facebook. Recovered items can include the User ID and Name of the person making the status update or wall post, and the text of the update/post itself. This artifact does not contain the date/time that the update or post was made. Possible locations: Temporary Internet files, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: High

Facebook® Emails

Description: This search will recover emails sent or received on Facebook®. Recovered items can include the Logged In User ID (the ID of the person logged in to Facebook when the email was sent/received), the subject of the email, the recipients of the email, the Last Updated Time (last time a message in the thread was added), the Original Author, the Thread ID#, the Time Rendered (local time), the Author’s User ID and Name, whether or not it was sent from a mobile device, any attachments, and the message. Possible locations: Temporary Internet files, live memory dumps, the pagefile.sys/hiberfil.sys files, file slack space, and unallocated clusters
Estimated Likelihood of Recovery: Medium

mIRC® Chat Logs

Description: This search will recover mIRC® chat logs and other logs (e.g. connection logs) saved by mIRC®. Each session located with these log fragments is saved separately into text files. Possible locations: mIRC log folders, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: Low to Medium

Yahoo!® Messenger Diagnostic Logs

Description: This search will recover the diagnostic logs saved by Yahoo! Messenger. These logs are created when a user attempts to report a problem with Yahoo! Messenger to Yahoo! Support by selecting the Help menu in Yahoo! Messenger and clicking “Report a Problem to Yahoo!”. They contain a wide variety of information including chat messages, user actions, files transferred, and more. A good number of these events have been tested and are parsed by IEF v4. There are some events that are not parsed at this time, but by checking the “Include unparsed entries” option in IEF, these events will still be included with some info being partially decoded.
Possible locations: Yahoo! Messenger program log folders, live memory dumps, the pagefile.sys/hiberfil.sys files, and unallocated clusters
Estimated Likelihood of Recovery: High

Requirements

IEF v4 has been tested on Windows XP, Windows Vista, Windows XP 64-bit, Windows Server 2008 64-bit, and Windows 7 (32-bit and 64-bit). It does not support running on Windows 2000 or Windows 9x.
IEF has been tested with and works on single ‘dd’ image files, physical drives connected via a write blocker or otherwise, Encase® PDE mounted images, FTK® Imager v3 Image Mounting, and files (such as pagefile.sys and hiberfil.sys, and memory dump files). IEF is also compatible with Mount Image Pro (tested with version 3.26.522).
Links to image mounting software:
AccessData FTK Imager v3
Download a trial version of Mount Image Pro
Visit the Mount Image Pro website

System requirements are minimal; if you have the required hardware for the operating system you are running, you can run IEF. However, a fast CPU and at least 2GB of RAM are recommended.
The speed of the storage device being searched (or containing the files being searched) will make a large difference in speed as well. A RAID 0 or SSD set-up is recommended.

Notice regarding artifact recovery

Please remember: IEF is, in essence, an automated data recovery tool. If the data does not exist, is fragmented/damaged (or in a format not tested by JADsoftware Inc.), or a special circumstance is complicating the search process, the data/artifacts will not be recovered. Some artifacts will also be easier to find, more abundant, or more likely to be recovered than others.
There will be recovered false positive hits in some cases, or partially recovered artifacts. This is due to the inherent nature of artifact recovery.

IEF V4 Portable Edition

With the release of IEF v4, a Portable Edition has been introduced.
The Portable Edition comes on a larger thumb drive (8GB at this time), can run directly from the thumb drive without being installed, and includes features to search Volume Shadow Copies on a live Windows Vista or Windows 7 (32 and 64 bit) system.
The Volume Shadow Copy searching is available in the Quick Search and Full Search – Sector Level searches. Both searches require that the Microsoft Volume Shadow Copy Service administrative command-line tool (vssadmin.exe) and the Microsoft utility mklink.exe (used to create symbolic/hard links) are present on the live system. If they are not present on the system, the volume shadow copies cannot be enumerated (or mounted in the Quick Search). Please also note that these executables cannot be copied from one system to another.
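
For reference, the mechanism those two utilities expose can be sketched generically. This is not IEF's internal code; the shadow copy device name and link path below are only examples, and the commands must run elevated on the live system.

# Generic illustration: enumerate shadow copies, then expose one as a folder
# via a directory symbolic link.
import subprocess

# List existing shadow copies; the "Shadow Copy Volume" lines carry the device names.
print(subprocess.run(["vssadmin", "list", "shadows"],
                     capture_output=True, text=True).stdout)

# mklink is provided by cmd.exe; the trailing backslash on the device path is required.
subprocess.run("mklink /d C:\\vsc1 "
               "\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy1\\",
               shell=True)
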
The Volume Shadow Copy portions of both searches are covered in the IEF v4 User’s Manual.
The Portable Edition is not available for download, but the files can be sent on special request to test out the functionality in demo mode. Please send a message using the Contact page to request a trial copy.
Portable Edition screenshots of the Quick Search and the Full Search – Sector Level search are omitted here.

IEF Report Viewer

With the release of IEF v4 comes a new component, the IEF Report Viewer. The Report Viewer allows for more control over which results are included in the final report, provides sorting and column rearrangement, multiple export formats, and can produce a complete, easy-to-use, easy-to-navigate HTML report that includes all the artifacts recovered.
IEF v4 provides an option at the end of a search to immediately load the case folder into the Report Viewer. If you don’t choose to open the case in the Report Viewer at that time, you can run the IEF Report Viewer later and you’ll be prompted to open the case.
Main screen (screenshot omitted):
All items are checked by default, which means if you export search results or create a report, everything will be included.
To sort the results on different columns, simply click the column header. The arrows in the header will turn green to indicate if the sorting is ascending or descending.
You may uncheck individual items or entire search categories, and if you save the case (File -> Save Case), your selections will be saved. Also, if you sort any search results and then save the case, the sorted items will be saved in that sorted order and will be in that order when the case is loaded at a later date.
Un-checking an item never deletes it, only the “Checked” state changes.

File menu (screenshot omitted):
In this menu you can save the case, export a single category, export all the categories, or create a report.
The export formats currently available are CSV (Comma Separated Values), Tab-delimited (a.k.a. Tab Separated Values, or TSV), HTML, and Excel (this option requires that you have Microsoft Excel installed on your system).
If a search contains individual exported files, you must use the HTML Export to create a report and export all the files belonging to that search category. The exported files will be linked into the HTML file. With large reports, be sure to save the report to a newly created subfolder in order to contain all the files being exported.
If you select Create Report, a complete HTML report is created for all the checked search categories/sub-items, including any files belonging to the search results.

Home Edition

A Home Edition is being developed for the home user, with reduced features and a reduced purchase price. Please stay tuned.

Trial Keys / Additional Evaluation of IEF v4

On a case by case basis, a trial key can be obtained for either the Standard or Portable Editions of IEF v4. Please send a message using the Contact page to request a trial key.

IEF v3 Support

Support for IEF v3 will continue until May 2011, but no new features will be added.

Upcoming features

  • Multi-threading

Licensing

Due to the amount of time required to develop, maintain, and support IEF, it is no longer free to Law Enforcement and the purchase price has increased. However, a substantial discount is provided for law enforcement, accessible through the Law Enforcement Portal.


Reposted from JADsoftware

Google Chrome Browser Profile

 

Windows Vista/Windows 7


Author Name
Joe Garcia

Artifact Name
Google Chrome Browser Profile Folder (Windows Vista/Windows 7)

Artifact/Program Version
Windows Vista/Windows 7

Description
In many digital forensics investigations, obtaining information about the user's browsing habits is an important step. We see lots of articles on IE & Firefox, but what about Google's Chrome browser? Like Firefox before it, Chrome is steadily gaining browser market share. This post points out where to find the Chrome user's profile folder. Most of the time it will be saved as “Default”, but be on the lookout for multiple profiles. Once you locate and extract the Chrome profile folder (listed below) from your image, you can use tools like ChromeAnalysis or ChromeForensics to assist you in parsing out the information stored within it. You will get the following data, which is stored in SQLite files:
History (Web, bookmarks, downloads and search terms)
Cookies
Web Logins
Archived History (Web History and search terms)
Bookmarks (This is in a non-SQLite format)

File Locations
HardDrive\Users\USERNAME\AppData\Local\Google\Chrome\User Data\Default

Research Links
Get Google’s Chrome Browser HERE

Forensic Programs of Use
ChromeAnalysis from forensic-software.co.uk: http://forensic-software.co.uk/chromeanalysis.aspx
ChromeForensics by Woanware: http://www.woanware.co.uk/?page_id=70
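
If you want to inspect the data by hand rather than with the tools above, most of these files are ordinary SQLite databases. A minimal Python sketch follows; it assumes a copied-out History file from the profile folder and that Chrome stores last_visit_time as microseconds since 1601-01-01 UTC.

# Sketch: dump URL history from a copied Chrome "History" SQLite database.
import sqlite3, datetime

EPOCH_1601 = datetime.datetime(1601, 1, 1)
con = sqlite3.connect("History")
for url, title, visits, last in con.execute(
        "SELECT url, title, visit_count, last_visit_time FROM urls"):
    when = EPOCH_1601 + datetime.timedelta(microseconds=last) if last else None
    print(when, visits, title, url)
con.close()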



Windows 2000, Windows XP, Windows Server 2003


Author Name
Joe Garcia

Artifact Name
Google Chrome Browser Profile Folder

Artifact/Program Version
Windows 2000/Win XP/Windows Server 2003

Description
Same as the Windows Vista/Windows 7 entry above: locate and extract the Chrome profile folder from your image and parse it with ChromeAnalysis or ChromeForensics to recover the history (web, bookmarks, downloads, and search terms), cookies, web logins, archived history, and bookmarks stored there. Only the profile folder's location differs on these older versions of Windows.

File Locations
HardDrive\Documents and Settings\USERNAME\Local Settings\Application Data\Google\Chrome\User Data\Default

Research Links
Get Google’s Chrome Browser HERE

Forensic Programs of Use
ChromeAnalysis from forensic-software.co.uk: http://forensic-software.co.uk/chromeanalysis.aspx
ChromeForensics by Woanware: http://www.woanware.co.uk/?page_id=70



Reposted from ForensicArtifacts.com

Analyzing pornographic material with FTK - EID

In computer forensics we sometimes run into cases involving pornographic content, whether in a simple check that users are respecting the company's security policy or in a more complex investigation. Jokes aside, spending the day combing through pornographic files can be unpleasant or even embarrassing...



To address this, the Forensic Toolkit (FTK) from AccessData includes a very interesting feature called EID: Explicit Image Detection. Essentially, it locates, identifies, and scores every image file (GIF, JPG, PNG, etc.) in the case evidence on a scale from 0 (not pornographic) to 100 (explicit pornography). For this detection, FTK uses three profiles:


  • X-DFT: the default profile, always selected; it produces a ranking well balanced between speed and accuracy.
  • X-FST: a faster-scanning profile, also used to rank folders based on how many files in the folder reach a high pornography score. It was built with a different technology than X-DFT to allow a fast response over a large volume of images, and the algorithm is quick enough to be used in applications that require real-time image analysis.
  • X-ZFN: the profile that produces the fewest false negatives; it is recommended for a second pass (Additional Analysis), restricted to the folders already flagged as pornographic by X-DFT.

I built a library with some "innocent" images and some pornographic images to study the tool's behavior. To make the results easier to review, all the erotic/pornographic images were stored in a directory called "have fund pics".
In this directory, X-FST missed only one of the eight potentially suspicious images, which left the directory with quite a high ranking (screenshot omitted).


Inside this directory I created another one, this time with fourteen items. Again, only a small share (three images) were "missed" by the tool, keeping the hit rate very satisfactory (screenshot omitted).


My test also included other images, in other directories, with "innocent" content. The results were as follows:
  • Out of 160 non-pornographic images, the system flagged 21 false positives
  • Out of 23 pornographic images, the system produced 4 false negatives (two of them black-and-white images)
Final result: of the 183 images used in the test, the system pointed me to 40 images to review (19 genuinely pornographic plus the 21 false positives), that is, roughly 22% of the sample!


EID really works!!
Paul Henry wrote a very good piece about EID on the SANS blog, analyzing a set of 60,000 (!!!) images. Well worth reading!



Reposted from the Brazil forensics blog

RecentDocs

Author Name
Joe Garcia


Artifact Name
RecentDocs


Operating System
Windows XP, Vista, Win7


Description
When starting a forensic examination, a great first artifact to check out is RecentDocs (or Recently Used Documents). By default, Windows will display 15 items in the “My Recent Documents” menu option. This will include .doc, .jpg, .pdf, etc. files. This is a great way to get a quick look at what files the subject of your investigation has opened recently. For example, for Law Enforcement officers, this is a great place to look if you have to investigate a suspicious death. Your victim may have actually created a suicide note on their computer and this artifact can help you find it. For Corporate investigators, your subject may have been snooping around for the recipe of your company’s “Secret Sauce” (or whatever proprietary data you wish to insert here). This artifact might show the document being opened on your subject’s computer. This can be used to corroborate other evidence obtained during your investigation.

When opening this artifact in a program such as MiTeC’s Windows Registry Recovery or AccessData’s Registry Viewer, you will see the following:



(Screenshot omitted: RecentDocs artifact in Windows Registry Recovery by MiTeC)
If you look at the Data in the “MRUListEx” Value, it will always start with the document that was opened most recently and work its way back. So in this case, document “08” was opened most recently. Each entry in the “MRUListEx” is four bytes in length. So going back four bytes from “08”, we can see that “07” was the next most recent document opened in this example.
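
The same ordering can be pulled out programmatically on a live system. The minimal Python sketch below uses only the standard library, reads the current user's live hive rather than an exported NTUSER.dat, and assumes each entry value begins with the document name as a NUL-terminated UTF-16LE string.

# Sketch: list RecentDocs entries most-recent-first by decoding MRUListEx,
# an array of 4-byte little-endian indexes terminated by 0xFFFFFFFF.
import struct, winreg

path = r"Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
    mru, _ = winreg.QueryValueEx(key, "MRUListEx")
    for index in struct.unpack("<%dI" % (len(mru) // 4), mru):
        if index == 0xFFFFFFFF:          # end-of-list marker
            break
        data, _ = winreg.QueryValueEx(key, str(index))
        name = data.decode("utf-16-le", errors="replace").split("\x00", 1)[0]
        print(index, name)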

You can also use everyone’s favorite registry parsing tool RegRipper to accomplish the same goal (and better might I add). RegRipper displays the RecentDocs in order from last opened to first opened. Again, this is defined by the default max number. Other documents opened earlier on will not be listed here.



(Screenshot omitted: RecentDocs displayed in RegRipper)


Registry Keys
NTUSER.dat


File Locations
NTUSER\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs


Research Links
- Default Max Number of Recent Docs (Microsoft TechNet): http://technet.microsoft.com/en-us/library/cc975956.aspx


Forensic Programs of Use
AccessData’s Registry Viewer
Harlan Carvey’s RegRipper
MiTeC’s Windows Registry Recovery


Reposted from http://forensicartifacts.com/2011/02/recentdocs/

Computer Forensics How-To: Microsoft Log Parser

Posted by Chad Tilbury

Filed under Computer Forensics, Evidence Analysis, Incident Response, Registry Analysis, Windows IR
As any incident responder will agree, you can never have too many logs. That is, of course, until you have to analyze them! I was recently on an engagement where our team had to review hundreds of gigabytes of logs looking for evidence of hacking activity. I was quickly reminded of how much I love Microsoft Log Parser.

Log Parser is often misunderstood and underestimated. It could possibly be the best forensic analysis tool ever devised. Imagine having the ability to take almost any chunk of data and quickly search it using SQL-based grammar. That's Log Parser in a nutshell. It is a lightweight SQL-based search engine that operates on a staggering number of different input types (see Figure 1). Yes, I know that tools like Splunk and Sawmill are built around this same idea, but keep in mind that Log Parser was written in the year 2000. I am constantly amazed at the power it affords the forensic analyst, and you can't beat the price (free). Save perhaps memory analysis, there isn't much it can't accomplish for an incident responder.

Figure 1: Architecture Diagram from Log Parser Documentation
In my mind, two things have limited the use of Log Parser in the forensics community: the command-line requirement and the fear of SQL queries. Neither is much of an obstacle, and since this is a how-to, let's debunk both.

Log Parser GUI


Log Parser's command-line isn't particularly onerous, but when staring at logs all day, I'm not afraid to admit that I prefer a GUI. There are several free options available, but I find Log Parser Lizard to be head and shoulders above the competition [1]. A few notable features of Log Parser Lizard:
  • Abstracts away from command line parameters allowing the user to focus solely on SQL queries
  • Allows column sorting, showing different views of the data without re-running the query (a big time saver when working with gigabytes of logs)
  • Includes an advanced grid option that gives Excel-like filtering capabilities and the ability to do Averages, Counts, Max, Min, and Sum equations on the fly
  • Simple interface for building charts
  • Tabbed results allow multiple queries to be run and compared
  • Contains a repository for saved queries, allowing you to organize your collection

I find the last feature to be especially helpful because every incident is different, and I frequently tweak queries. It is nice to be able to look through my archive or save a new one for future use. I use an "Examples" folder to save interesting solutions so I can refer back to them when building complicated searches.

Figure 2: Saved Queries Organized by Log Parser Lizard

SQL Query Basics


The Internet is rife with excellent examples of Log Parser queries. I'll cover a few here and provide some links to more comprehensive lists [2] [3] [4]. To really learn Log Parser I recommend grabbing some sample data, doing a Google search, and just playing with whatever queries strike your fancy. Like any computer language, there are multiple ways to achieve the same results, and taking the time to understand different queries is a quick way to learn the various functions and syntax. Do not be overwhelmed -- you can create very powerful queries with a very limited Log Parser vocabulary. As an example, consider the following query:

SELECT
EXTRACT_EXTENSION(cs-uri-stem) as Extension,
Count(*) as Total
FROM [IIS logs]
GROUP BY Extension
ORDER by Total DESC

I often run this query because it gives me a quick view of the different file types that were requested from the web server. Breaking this down into its components, the SELECT clause tells Log Parser what elements of the log file we wish to display. Cs-uri-stem is an IIS log field that records the page requested from the web server [5]. The FROM clause tells Log Parser what the inputs will be. SELECT and FROM are the only required elements of a query. The GROUP BY clause is necessary when using an aggregate function, like "Count", to give the total requests for each extension. Finally, the ORDER clause is optional but tells Log Parser to order the displayed results according to the value of Total in descending order (DESC).

Figure 3: Log Parser Output Showing File Extension Counts from IIS
The output in Figure 3 gives me a good starting point for my review. Knowing the multitude of CGI vulnerabilities that exist, I would certainly want to look deeper there. Similarly, I would also plan to investigate what .pl and .exe files are being accessed on the webserver. The next step is to run a follow-up query:

SELECT
EXTRACT_EXTENSION(cs-uri-stem) as Extension,
sc-status as StatusCode,
Count(*) as Attempts
FROM [IIS logs]
WHERE Extension = 'cgi'
GROUP BY Extension, StatusCode
ORDER by Attempts DESC

Figure 4: Log Parser Output Showing CGI Extensions by HTTP Status Code
I added two items to this query. The first, sc-status, provides the HTTP status code for the request, indicating whether the web requests were successful (200s) or unsuccessful (typically 400s and 500s) [6]. The second addition is the WHERE clause, giving the ability to filter my results. In this case, I indicated I only wanted to see the count of status codes for files with a CGI extension. The WHERE clause is incredibly helpful for culling output and is the backbone of many Log Parser Queries. Looking at the results in Figure 4, I can see there were no successful requests for CGI files on this server. They were either not found (404) or the server refused to respond to the request (403).

A final action might be to take a look at some of the CGI queries to determine whether the errors were due to misconfigurations or nefarious activity. Since I want to see all fields from the logs related to CGI files, my query will be quite simple (* indicates all fields):

SELECT *
FROM [IIS logs]
WHERE EXTRACT_EXTENSION(cs-uri-stem) = 'cgi'

Figure 5: Log Parser Output Listing Requests for CGI files
A quick review of the results in Figure 5 shows requests for several suspicious CGI files as well as a browser user agent of "Nikto". Based on this information, I can surmise that this web server was scanned using the Nikto vulnerability scanner on 10/13/10 at 1:03:28 UTC.

The key takeaway is that during a log review, you will be running multiple queries to cut across a massive amount of data. By slicing the data in different ways, you have a much better chance of finding anomalous or malicious activity than if you were to attempt to review the logs manually.

Figure 6: Parsing the Registry

Using Log Parser to Query the Windows Registry


Log Parser has a myriad of uses other than just parsing text files. The Windows Registry is a great example of a very large binary file that Log Parser can natively search. Figure 6 shows an example of sorting the Registry by LastWriteTime. In this case, I asked Log Parser to return the Path, KeyName, ValueName, Value, and LastWriteTime of any Registry entry updated between 11/1/10 and 11/6/10 from the HKLM, HKCU, HKCC, HKCR, and HKU hives. This system was suspected of being compromised at the beginning of November, and we were looking for any changes precipitated by the intruders. Among other things, the results make it clear that WinSCP was installed on the system during that timeframe.

You might have noticed in my query that I specified a machine name, \\HYDL56, for each hive. This notation allows querying of remote machines over the network. It is particularly useful if you are searching multiple systems for a specific indicator of compromise. Alternatively, I could have run the same query on the local machine by just specifying the hives of interest (HKLM, HKCU, ...). This is a good example of when the command line version can be helpful, particularly when built into live response scripts.
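
The exact SQL behind Figure 6 is not reproduced in the post, so the following is only a rough reconstruction of that idea, wrapped the way it might be dropped into a live response script. It assumes LogParser.exe is on the PATH and uses Log Parser's REG input format; the machine name HYDL56 and date range are the ones mentioned above.

# Sketch: registry values written between two dates, read from a remote
# machine's live hives through Log Parser's REG input format.
import subprocess

query = (
    "SELECT Path, KeyName, ValueName, Value, LastWriteTime "
    "FROM \\\\HYDL56\\HKLM, \\\\HYDL56\\HKCU, \\\\HYDL56\\HKCC, "
    "\\\\HYDL56\\HKCR, \\\\HYDL56\\HKU "
    "WHERE LastWriteTime >= TIMESTAMP('2010-11-01', 'yyyy-MM-dd') "
    "AND LastWriteTime <= TIMESTAMP('2010-11-06', 'yyyy-MM-dd')"
)
subprocess.run(["LogParser.exe", "-i:REG", "-o:CSV", query])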

Unfortunately I am not aware of any easy way to use Log Parser to query offline Registry files that we might pull from a forensic image. The current version of Log Parser does not accept offline Registry files as input. If you were truly motivated, you could extract data from the Registry hives in text form and pipe to Log Parser, but it would need to be a special case to be worth the effort.

Usage Tips


1. Start with high-level queries, and view your logs from many different perspectives
Reviewing HTTP status codes, looking for excessively long URI stems and queries, and searching for known bad keywords like "xp_cmdshell" are all excellent ways to identify SQL injection. By looking for the same attacks in different ways, you increase your chances of finding that needle in the haystack.

2. Use the principle of Least Frequency of Occurrence
Malicious activity on your system is by definition anomalous and will usually be some of the least frequent events on a system. Use Log Parser to trend activity such as hourly hits to critical .aspx pages and look for data that stands out. If you see thousands of 404 errors in your logs and only a few 403 errors, or a grouping of abnormal entries at 1AM on Saturday, those items might be worth investigating.

3. Request more data elements than you think you need
Often times a more in-depth investigation can be avoided with just a little more information. As an example, sometimes adding the web request query string (cs-uri-query) is much more helpful than just reviewing the page requested (cs-uri-stem) alone (Figure 7).

Figure 7: Extra Fields Can Make a Big Difference

4. Get familiar with the built-in functions
Log Parser includes 80+ supporting functions that can greatly simplify queries. I used EXTRACT_EXTENSION in the examples above, and there are many others like EXTRACT_PATH, EXTRACT_FILENAME, STRLEN, TO_LOWERCASE, etc. [7]

5. Take advantage of the copious documentation available
I have only touched on a few of Log Parser's capabilities. It can slice and dice Event Logs (both .EVT and .EVTX) with aplomb. You can perform complicated searches of a live file system, including using functions like HASHMD5_FILE to compare MD5 hashes. Remote systems can be queried and large scale searches of Active Directory objects can be performed. Once you learn the basics, its power is really only limited by your creativity. Log Parser installs with excellent documentation, and there is even an entire book on the subject [8].

References


[1] Log Parser Lizard. If you like the tool I recommend paying $10 for the "Pro" version to encourage future development!
[2] Forensic Log Parsing with Microsoft's LogParser by Mark Burnett. This is an extremely good article covering incident response on IIS servers
[3] How To Analyze IIS logs with example SQL code. Numerous examples of SQL queries
[4] Dave Kleiman did an excellent post to the SANS blog showing how to use Log Parser for USB device information retrieval
[5] W3C IIS Fields
[6] HTTP Status Codes
[7] Log Parser Functions
[8] Microsoft Log Parser Toolkit book (Gabriele Giuseppini). Trying to cover even a fraction of Log Parser's functionality in a blog post is daunting because the topic is much better suited to a technical reference. Giuseppini is the tool author and he and his co-authors do a superb job of teaching it using easy to follow examples.



Reposted from SANS

A Quick Look at Volatility 1.4 RC1 - What's New?

Posted by lennyzeltser

Volatility is a popular open source framework for performing memory forensics. The current production version of Volatility is 1.3. The Volatility development team is putting finishing touches on version 1.4, which is currently in the Release Candidate 1 status. While there may still be some bugs to be ironed out, Volatility 1.4 RC1 is sufficiently stable for general exploration and experimentation.

I'd like to briefly highlight some of the changes that were made to Volatility since its 1.3 release. This note is designed for individuals who are already somewhat familiar with Volatility 1.3, and are wondering what to expect from 1.4:

Volatility 1.3 only supported the analysis of Windows XP memory images. Volatility 1.4 includes basic support for analyzing memory images of Windows Vista and Windows 7.

The plugin architecture has changed from version 1.3 to 1.4. The good news is that the most popular plugins have already been ported to version 1.4. Moreover, the most useful plugins that needed to be installed separately in version 1.3 have been incorporated into the core Volatility 1.4 distribution. This means that it's easier to install the framework. This also means that the plugins are more uniform in their usability, such as the command-line parameters they take.

VolRip (rip.pl), which can be used for examining registry contents from the memory image, is presently only compatible with version 1.3 of Volatility.

The logic behind the "psscan2" plugin for Volatility 1.3 has been incorporated into the new "psscan" plugin for Volatility 1.4. The "psscan3" plugin's logic has not yet been ported to Volatility 1.4.

Volatility Analyst Pack, which included popular plugins for analyzing malware through memory forensics, has been retired. It has been replaced with the malware.py library, which implements malfind, apihooks, orphanthreads, mutantscan, ldrmodules and other malware-related Volatility plug-ins.

You can now include the Volatility plugin command at the very end of the command line, even after the "-f" parameter. If you don't want to define the memory image's file name with "-f", you can also define it as a variable ("export VOLATILITY_FILENAME=/var/tmp/memory.img") and then repeatedly invoke Volatility without the "-f" parameter.

Some of the plugin names have changed in version 1.4. For instance, "memdmp" is now "memdump"; "malfind2" is "malfind"; "procdump" is "procexedump". The parameters these plugins accept have changed in some cases, too. For instance, "malfind" now uses "-D" instead of "-d" to specify its destination directory.

You can grab Volatility 1.4 RC1 by using SVN, pointing it to http://volatility.googlecode.com/svn/branches/Volatility-1.4_rc1. If you don't feel like installing Volatility 1.4 RC1 on your own, you can experiment with it on REMnux. REMnux is a lightweight Linux distribution for assisting malware analysts in reverse-engineering malicious software; it now includes Volatility 1.4 RC1 and is available as a Live CD and a virtual appliance.


Reposted from SANS

Four Rules for Investigators

For my Crash Course in Computer Forensics I came up with four rules to keep investigators out of big trouble. Obviously it's still possible to mess up when following these, but breaking any of these will be, to use the technical term, bad. What do you think? Are there other cardinal rules for computer forensics? Do these apply to any other fields?

1. Have a plan - What are you looking for? How do you know when you're done? If you don't find what you're looking for, how long are you prepared to spend on the search? Your plan doesn't have to be set in stone. It can change based on things you find.

2. Have permission - You must have permission to look at the data in question and that authority must be granted by somebody who has the authority to do so. Sometimes this is cut and dried; the search warrant from the judge literally commands you to do something. But in a corporate environment it can be far more complicated.

3. Write down what you do - Take notes. Document what you're working on and what you do to it. Make and model names, serial numbers, locations, procedures, imaging techniques, write blockers. Any time you touch a piece of original evidence, write it down.

4. Work on a copy - Once you've imaged your original evidence, lock it up. Only work on copies. You can't break something you're not touching.

SUMURI Paladin Forensic Boot CD: Test Notes

Sprite has posted a test write-up; have a look!


Steve of SUMURI has developed a Linux-based forensic boot CD with very strong capabilities: disk imaging, disk cloning, data wiping, image mounting, and many other useful functions. At the Hong Kong conference, Sprite and Steve talked at length and decided to jointly release, in 2011, an edition with a Chinese interface adapted to local needs. For now the CD is free and English-only; it can be downloaded from the company's website at http://www.sumuri.com/software/paladin-download.html.


After downloading the image you can burn it to disc and use it to boot PCs and Intel-based Macs. Sprite loaded the ISO in a Parallels virtual machine on his MacBook to demonstrate the basic imaging features.


When the CD boots, a language option appears. The CD is built on Ubuntu; English or Chinese can be selected here.

On the boot menu, choose the first option to start Paladin.

After a few dozen seconds to a minute, the CD boots to the desktop and first shows the software's legal notice. Click OK to agree.

Through the file browser on the CD you can see the data on the suspect disk. In this example you can see the directories and files of the Parallels Windows virtual machine Sprite had created.

Note that when this CD is used to boot a suspect computer it is write-protected, so it will not modify any data on the suspect machine. Next, we launch the Paladin Toolbox.

Run the Paladin Toolbox from the left side of the desktop and the program window appears. This is the core of the software, and it shows the main functions: imaging, verification, search, network mounting, and so on. Let's test imaging first.

As the screenshot above showed, there is one selector for the source disk and two for destinations, so a one-to-two copy is possible. The destination can be a physical disk or an image file.

Image files can be written in three formats: EnCase E01, Apple DMG, and Linux DD.

In this test, Sprite selected the 730 MB Paladin CD itself as the source, with an E01 image and a DD image as the two destinations, saved to a partition inside the virtual machine. Once the selected images were created, hash verification began.

The whole imaging run took roughly 2 minutes 30 seconds, which is not bad.
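
Verification is conceptually just re-reading the evidence and comparing hashes. A minimal Python sketch of that idea for a raw dd image follows; Paladin's own implementation and report format are not shown here, and the file name is an example.

# Sketch: hash a raw dd image in chunks so the digest can be compared with
# the value recorded at acquisition time.
import hashlib

def hash_image(path, algo="md5", chunk=1024 * 1024):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical usage: hash_image("paladin_test.dd")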

The verification pass then runs. When it finished, we got the following verification report (screenshot omitted).

This was just a quick test by Sprite; the other functions will be covered in later tests.

All in all this is a very nice tool: free, yet genuinely powerful. Paladin also comes as a bootable USB drive, which is more convenient for booting devices, but because of the hardware cost a small fee is charged for it. Anyone who needs it can come to Sprite in Beijing to get a copy.

At this year's summit, April 12-14, 2011, the CD's author will be in Beijing. He is an expert in Mac forensics; the Apple instructor introduced in an earlier post is him.



Reposted from 計算機取證技術 (Computer Forensics Technology)

A Digital Evidence Forensics Standard Operating Procedure for iPhone Phishing Cases

Professor Lin Yi-Long (林宜隆), Institute of Information Management, Central Police University

1. Introduction
Phishing has grown increasingly rampant in recent years, and the number of victims keeps rising. To obtain personal identity information, phishers work through phone calls, email, instant messaging, fax, and other channels, and most phishing attempts disguise themselves as legitimate organizations (for example Yahoo, Google, and so on). The two most common forms of electronic phishing attack are email and fraudulent web pages. Attackers typically send HTML email carrying company logos, colors, graphics, and font styles, with subjects and content about account verification, security upgrades, or free products and services; ordinary users usually cannot tell them apart from the real thing. The links in these emails mostly look identical to the legitimate sites, which makes the fraud almost impossible to detect. According to statistics, financial websites are phishers' favorite targets because they yield money directly; Citibank, eBay, Bank of America, and PayPal are the four most attacked brands.

Smartphones have become a trend in recent years, and most of them can go online and browse the web, which also creates many security problems. With phishing rampant and smartphones spreading quickly, many users may unknowingly fall into phishing traps set by attackers while browsing from their phones anytime, anywhere. There are many kinds of smartphones (for example iPhone, HTC, Sony Ericsson, and so on); among them, the iPhone produced by Apple under Steve Jobs is the most popular, and its information security issues have come to the surface accordingly.


According to Symantec, phishing sites declined 15% in December 2010, attributable to a drop across every category of phishing site. Symantec also found that spam appeared to climb again starting in January, driven by the revival of botnets such as Rustock. In recent months Symantec has observed a series of phishing sites aimed at well-known social networks, with attackers using several new lures (webcam phishing sites) to trick consumers into disclosing confidential personal information. A standard digital forensics operating procedure is needed to collect and analyze these phishing sites and provide judges with the relevant digital evidence, so that the spread of phishing can be curbed.


2. What is phishing?

Phishing (pronounced like "fishing") is a form of social engineering attack. It is a criminal fraud process that attempts, through electronic communications, to impersonate reputable organizations in order to obtain user names, passwords, and personal information. One typical phishing lure starts with an email message: posing as a bank, credit card company, or reputable online merchant (for example Google, Yahoo, Microsoft, eBay, and so on), the phisher sends the customer an official-looking notification that leads the user to a page crafted to look almost indistinguishable from the real site, and thereby steals the victim's identity data, financial account numbers, passwords, and other confidential information.


3. What are computer digital forensics and the digital forensics standard operating procedure?

Digital forensics is about finding, inside a computer, evidence that can establish the facts at issue in court; that is, it is the process of preserving, identifying, extracting, and documenting the relevant digital data in a computer. A rigorous forensic procedure gives digital data admissibility as evidence and strengthens its probative value. In other words, the purpose of digital forensics is to ensure the non-repudiation and integrity of digital evidence during collection, so that the evidence is fully admissible and can be presented in court.

Digital evidence is easy to modify, hard to individualize, not directly human-readable, and difficult to acquire. Examiners need a standard operating procedure for digital forensics to deal with how easily digital evidence can be altered and to ensure the integrity of the original data and of the evidence extracted from it. The Digital Evidence Forensics Standard Operating Procedure (DEFSOP) can be divided into the following four phases:

  • Concept phase:
    Acquisition of digital evidence must follow the principles of legality and authenticity; a party may not obtain evidence by illegally intruding into someone else's computer system. The procedures and authorizations for obtaining digital evidence must comply with legal requirements.

  • Preparation phase:
    Carry out the preparations that precede the examination and gather relevant information, in readiness for the procedures of the operation phase.

  • Operation phase:
    This phase is divided into three procedures: Collection, Analysis, and Forensics (examination). Digital data falls into volatile data, fixed data, and file system data; the Collection procedure must take the data type into account and choose appropriate tools to collect it. The Analysis procedure covers data types such as ordinary files, (system) records and audit files, various logs (system, event, and security logs), malicious code, and so on. The Forensics procedure mainly involves data extraction, comparison and individualization, and crime scene reconstruction.

  • Report phase:
    Provide the material the court needs for trial. Note that the report will be presented to non-technical readers as reference evidence in court, so examiners must present the forensic process accurately and in detail while keeping jargon to a minimum.



Figure 1: Architecture of the Digital Evidence Forensics Standard Operating Procedure (figure omitted)

4. The iPhone and forensic software in brief

Like other electronic products, the iPhone is very complex. It is built from many chips and other electronic components, including a CPU (a RISC architecture), memory, wireless networking hardware, and more; in short, it is a small embedded computer. The iPhone forensic process therefore needs to be supported by the general digital forensics process.

Digital evidence on an iPhone can be collected with many forensic software tools (for example WOLF by Sixth Legion, Cellebrite UFED, Paraben Device Seizure, and so on). The iPhone has no memory card, so some forensic tools can only get past Apple's verification by jailbreaking the device. iPhone forensic tools mainly collect digital evidence such as call logs, SMS, contacts, and email. They gather data in the following ways:

  • Direct acquisition of iPhone data: essentially syncing the iPhone with a computer to recover files.
  • Backing up and copying iPhone system files: querying the iPhone's databases directly, which can recover more deleted information, such as SMS.
  • Physical bit-by-bit copy: as in ordinary computer forensics, copying the entire secondary storage without altering the original data.

5. iPhone digital data and the digital evidence standard operating procedure

Taiwanese scholar Professor Lin Yi-Long divides smartphone digital data into three categories (volatile data, fixed data, and file system data) and divides the digital forensics standard operating procedure into four phases: concept, preparation, operation, and report. The operation phase is the most important and is itself divided into collection, analysis, and examination.



Figure 2: Classification of smartphone digital data (figure omitted)

Figure 3: iPhone standard operating procedure (figure omitted)

6. Comparing the iPhone and computer digital forensics standard operating procedures

The standard operating procedures for iPhone and general computer forensics do not actually differ much; the main differences lie in how the device is seized and in the operation phase. In computer forensics, seizure focuses on preventing changes to the digital data and damage to the hardware, so the computer is placed in a sturdy evidence container. For a smartphone, however, the container must also block radio signals, so that data inside the phone is not altered by incoming signals.

The data collected in the operation phase, and the forensic software tools used, differ between computers and iPhones. Computer forensics mainly collects volatile data (for example memory and network connection information) and non-volatile data (secondary storage). Smartphone forensics likewise collects volatile and non-volatile data (the phone's internal storage and memory cards), but the content and structure of what is stored differ, so it makes sense to keep the two standard operating procedures separate.


Figure 4: Comparison of the digital forensics standard operating procedures (figure omitted)

7. Conclusion

With smartphones developing and spreading rapidly in recent years, and with phishing rampant and its techniques constantly changing, the number of phishing victims grows every year, leading to leaked personal data, losses, and harm. Digital forensics has therefore become very important. Using the Digital Evidence Forensics Standard Operating Procedure (DEFSOP) together with forensic software, the information on computers and smartphones can be collected and analyzed, and the analysis written up as a forensic report that gives judges a basis for their rulings. Phishing techniques keep changing and smartphones keep advancing, which means anyone can fall into a phishing trap at any moment; through digital forensics processes and techniques, the patterns, behaviors, and characteristics of phishing can be analyzed, and that information can help users avoid those traps and so serve prevention. Digital forensics technology will keep improving, and on both computers and ever-changing smartphones it will be able to collect and analyze digital evidence accurately and help victims.


Reposted from TWCERT/CC

Virtual Training Environment (VTE)!

Click the Category menu to select the forensics-related courses.
https://www.vte.cert.org/vteweb/Library/Library.aspx


EnScript to parse LNK files into Excel - sortable on timestamps

The EnCase "Case Processor" EnScript includes a Link File Parser module that work fine, but does not produce a very efficient report. For example, if you want to quickly see all the LNK files that refer to object on removable media, you have to read through all the entries to find one that may be on a removable device. Also, there is no way to sort the data by the timestamps contained in the LNK file to build a timeline.

I wrote this EnScript several months ago for a specific need I had back then, but never had a chance to post it.

This EnScript requires Microsoft Excel be installed and it will parse all the LNK files in the case (no need to select). The data will be sent to Excel and a spreadsheet will automatically open, displaying the data. You can then easily sort on any field and quickly see the properties of each Link file.
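
For readers without EnCase, the timestamps the spreadsheet is sorted on sit at fixed offsets in the LNK header and are easy to pull out independently. The minimal Python sketch below is illustrative only and is not the EnScript itself.

# Sketch: read the creation/access/write FILETIMEs from a .lnk file header.
# A FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC.
import struct, datetime, sys

def filetime(ft):
    return datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=ft // 10)

with open(sys.argv[1], "rb") as f:
    header = f.read(76)                      # the ShellLinkHeader is 0x4C bytes
created, accessed, written = struct.unpack_from("<3Q", header, 28)
print("Created: ", filetime(created))
print("Accessed:", filetime(accessed))
print("Written: ", filetime(written))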

Download Here


Reposted from ForensicKB