Computer Forensic Hard Drive Imaging Process Tree with Volatile Data




Identifying Memory Images


Have you ever been given a memory image to examine and not known what OS it was? Or maybe you were told it was X when it was really Y? Or perhaps you have a collection of images that may not be labeled correctly?

So how do you figure out the OS of an unknown Windows image?


You could use strings to look for clues about the OS type, for example by looking for version numbers [1]. These can often be found in close proximity to a DLL name. Two examples (Windows XP and Windows 7) are shown below:

Windows XP: 5.1.2600

2546060: 5.1.2600.0 (xpclient.010817-1148)
2546134: InternalName
2546160: HCAppRes.dll
2546194: LegalCopyright
2546226: Microsoft Corporation. All rights reserved.
2546322: OriginalFilename
2546356: HCAppRes.dll

Windows 7: 6.1.7600.16385 (win7_rtm.090713-1255)

1335896: 6.1.7600.16385 (win7_rtm.090713-1255)
1335978: InternalName
1336004: BlbEvents.dll
1336038: LegalCopyright
1336070: Microsoft Corporation. All rights reserved.
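This hunt is easy to automate. The sketch below counts occurrences of known build-number strings in a raw image and reports the dominant one; the mapping is illustrative and incomplete (service packs carry later build numbers, e.g. 6.0.6001/6002 for Vista SP1/SP2), so treat the result as a hint, not a verdict:

```python
from collections import Counter

# Build-number strings that show up in version resources near DLL names.
# Illustrative and incomplete: service packs use later build numbers.
VERSION_HINTS = {
    b"5.0.2195": "Windows 2000",
    b"5.1.2600": "Windows XP",
    b"5.2.3790": "Windows Server 2003",
    b"6.0.6000": "Windows Vista",
    b"6.1.7600": "Windows 7",
}

def guess_os_from_strings(image_path, chunk_size=64 * 1024 * 1024):
    """Count known version strings in a raw image; the dominant hit is the hint.

    Reads in chunks so large images need not fit in memory. Matches that
    straddle a chunk boundary may be missed, which is acceptable for a
    heuristic that relies on many occurrences.
    """
    counts = Counter()
    with open(image_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            for needle, os_name in VERSION_HINTS.items():
                n = chunk.count(needle)
                if n:
                    counts[os_name] += n
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```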

How do you determine whether the memory image is from an x86 or x64 machine? Here you can look for environment variables like PROCESSOR_ARCHITECTURE and PROCESSOR_ARCHITEW6432 (used for WOW64) [2]. An example from an x86 machine:

PROCESSOR_ARCHITECTURE=x86
PROCESSOR_IDENTIFIER=x86 Family 6 Model 37 Stepping 5, GenuineIntel
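A hedged sketch of the same check in code. In a raw memory image these variables may appear as ASCII (as in strings output) or as UTF-16LE, so both encodings are searched; PROCESSOR_ARCHITEW6432 only exists for WOW64, i.e. 32-bit processes on a 64-bit OS:

```python
# Markers for the environment-variable heuristic described above.
X64_MARKERS = [b"PROCESSOR_ARCHITEW6432", b"PROCESSOR_ARCHITECTURE=AMD64"]
X86_MARKER = b"PROCESSOR_ARCHITECTURE=x86"

def guess_bitness(data):
    """Heuristic bitness check over raw image bytes (ASCII and UTF-16LE)."""
    def present(needle):
        return needle in data or needle.decode().encode("utf-16-le") in data
    # Check x64 first: a WOW64 system contains the x86 marker as well.
    if any(present(m) for m in X64_MARKERS):
        return "x64"
    if present(X86_MARKER):
        return "x86"
    return "unknown"
```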



More details about these variables can be found in [2].

Still, this is more labor intensive than it need be.


Remembering a blog post Moyix wrote about finding kernel global variables in Windows, I figured each OS would have a different size following the OwnerTag defined in wdbgext.h.


Moyix gives us the pattern to search for on x86 OSes, since the end of the LIST_ENTRY64 will be zero for x86 machines [3].


First let's try to find the sizes for each OS:

$ xxd xpsp3x86.dd | less
[skip]
0000b70: 6780 0000 0000 0000 0000 4b44 4247 9002  g.........KDBG..
[skip]

$ xxd win7x86.dd | less
[skip]
0000bf0: ffff ffec 6fbb 83ec 6fbb 8300 0000 0000  ....o...o.......
0000c00: 0000 004b 4442 4740 0300 0000 8084 8300  ...KDBG@........
[skip]

After examining XP, W2K3, Vista, W2K8, and Windows 7 machines (across different service packs), this is what we get (the Windows 2000 value was not verified personally but taken from Moyix's blog [3]):

OS            Size
Windows 2000  \x08\x02
XP            \x90\x02
W2K3          \x18\x03
Vista         \x28\x03
W2K8          \x30\x03
Windows 7     \x40\x03

Now we need to find the pattern for x64 systems as well. We could do this with a hexdump of memory images to find the KDBG pattern:

$ xxd win7x64.dd | less
[skip]
0000080: f8ff ff10 44a1 0200 f8ff ff4b 4442 4740  ....D......KDBG@
0000090: 0300 0000 f080 0200 f8ff ff60 8f87 0200  ...........`....
[skip]

$ xxd w2k8x64.dd | less
[skip]
0000f10: f8ff ff40 f878 0100 f8ff ff4b 4442 4730  ...@.x.....KDBG0
0000f20: 0300 0000 c060 0100 f8ff ff60 b865 0100  .....`.....`.e..
[skip]

After examining several x64 dumps, the pattern that seemed universal to them was '\x00\xf8\xff\xff' (the tail of a kernel-space pointer) immediately preceding the KDBG tag.


The header sizes also appear to remain the same for x64 and x86 machines. So there it is. You can search for a unique pattern in the memory image in order to figure out what OS it is. Some examples:

Windows 7x86: '\x00\x00\x00\x00\x00\x00\x00\x00KDBG\x40\x03'
W2K3 x86: '\x00\x00\x00\x00\x00\x00\x00\x00KDBG\x18\x03'
W2K8 x64: '\x00\xf8\xff\xffKDBG\x30\x03'

You could very easily write a Python script to identify Windows memory images using this technique, but you don't have to: this has already been incorporated into the Volatility 1.4 framework as a plugin. Thanks to Mike Auty (ikelos) for doing the honors :-)
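For illustration only (this is not the Volatility plugin), such a scanner can be sketched in a few lines. The signature table comes from the sizes collected above; the x64 entries for 2003 and 2008 are extrapolated from the observation that header sizes stay the same across bitness, so verify them before relying on a match:

```python
# KDBG signatures: header size per OS, preceded by the bytes that precede
# the OwnerTag -- eight zero bytes on x86 (end of a LIST_ENTRY64) or the
# tail of a kernel-space pointer on x64.
KDBG_SIGNATURES = {
    b"\x00" * 8 + b"KDBG\x90\x02": "Windows XP x86",
    b"\x00" * 8 + b"KDBG\x18\x03": "Windows 2003 x86",
    b"\x00" * 8 + b"KDBG\x28\x03": "Windows Vista x86",
    b"\x00" * 8 + b"KDBG\x30\x03": "Windows 2008 x86",
    b"\x00" * 8 + b"KDBG\x40\x03": "Windows 7 x86",
    b"\x00\xf8\xff\xffKDBG\x18\x03": "Windows 2003 x64",
    b"\x00\xf8\xff\xffKDBG\x30\x03": "Windows 2008 x64",
    b"\x00\xf8\xff\xffKDBG\x40\x03": "Windows 7 x64",
}

def identify_os(image_path, chunk_size=64 * 1024 * 1024):
    """Return the OS names whose KDBG signature appears in a raw image."""
    overlap = max(len(sig) for sig in KDBG_SIGNATURES) - 1
    found = set()
    tail = b""
    with open(image_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            # Prepend the previous tail so signatures that straddle a
            # chunk boundary are still found.
            window = tail + chunk
            for sig, os_name in KDBG_SIGNATURES.items():
                if sig in window:
                    found.add(os_name)
            tail = window[-overlap:]
    return sorted(found)
```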


[1] List of Windows Versions

[2] HOWTO: Detect Process Bitness

[3] Finding Kernel Global Variables in Windows


Decoding Prefetch Files for Forensic Purposes

Prefetch files can be a great source of evidence in a forensic investigation. The purpose of this article is to explore the many different forensic artifacts that can be discovered in Windows prefetch files. The first section will briefly cover the prefetch file and the prefetching process. The second section will discuss the forensic value of the prefetch file, specifically the forensic artifacts the prefetch file contains and the story that can be revealed by the mere existence or absence of prefetch files. The article will conclude with some examples of how you can use prefetch files to aid in forensic analysis, and what to watch out for when using prefetch files to prove or disprove a case.

The main purpose of this article is to explain the use of prefetching in forensic analysis, but it is important to have a baseline understanding of the technology to provide a good foundation for how and why prefetch files contain certain artifacts. The prefetching process used by Microsoft was created to speed up Windows operating system and application startup. Prefetching occurs when the operating system, specifically the Windows Cache Manager, monitors certain elements of data as they are extracted from the disk into memory. This monitoring occurs during the first two minutes of the boot process each time the system is started, for sixty seconds after all the Win32 services have completed their startup, and for the first ten seconds after an application is executed. The Cache Manager then records these “faults” and works with the Task Scheduler, which after some pre-processing writes the data to files called prefetch files.1 The purpose is for these files and their locations to be readily available and consolidated before they are demanded. Windows prefetching, in short, is the process of the operating system moving data from the hard drive into memory before it is needed. For example, when a user executes notepad.exe, the Cache Manager will look in the prefetch directory to see if a prefetch file exists for that application. If one does exist, the Cache Manager will notify the NTFS file system to read the notepad.exe prefetch file, extract the Master File Table (MFT) metadata, and open any directory or file referenced in that prefetch file.

Windows Prefetching Background
Windows prefetching started with Windows 2003 Server and Windows XP. Windows Vista took the prefetch file one step further with the creation of the superfetch file. Superfetch enhanced XP's prefetching by creating a profile of applications that shows how, when, and how often you use a particular application.

There are three types of prefetch files: boot trace, application, and hosting application. Each prefetch file type has a specific independent purpose. The boot trace prefetch file’s main purpose is to help speed up the operating system when it’s being started or rebooted. The application prefetch file was created with the intent of speeding up the time it took for Windows to load certain applications. These applications include all native Windows applications, such as notepad, cmd.exe, and any third party applications that run on Windows, such as Adobe Reader, Firefox, and Microsoft Word. The last type of prefetch file is the hosting application prefetch file, which records the trace activity of certain programs that are used to spawn system processes. These programs that start other processes include DLLHOST.exe, RUNDLL32.exe, and MMC.exe. Windows needs a way to keep track of the different programs that can start multiple different processes, which is why they are categorized separately as hosting applications.2

Prefetch files are located in the prefetch folder found under C:\Windows\. This location is the same for all current systems that use prefetching technology. The contents of the prefetch directory are different for each of the Windows operating systems. Windows 2003 Server only contains one prefetch file called a Boot Trace prefetch file. Windows XP contains not only prefetch files, but also a file called layout.ini. The layout.ini file is a list of the contents of the prefetch files, specifically the NTFS/MFT log sections that contain a list of files and their logical locations or paths. The entries in the layout.ini file are organized in the order in which they are loaded. The entries in the layout.ini file will then be moved or “reallocated” to a contiguous section of the hard drive, which will result in a faster recall time by the operating system. The process of moving the physical location of the files located in the layout.ini file occurs about every seventy-two hours when the Task Scheduler executes the defragmenter. The focus of the defragmenter is only on the contents of the layout.ini file and not the whole disk drive. Since these files are now physically located contiguously on the drive they will be read much faster.

The naming convention is unique for each of the three types of prefetch files mentioned above: boot trace, application, and hosting application. Since there is only one boot trace prefetch file, its name is static: NTOSBOOT-B00DFAAD. NTOSBOOT is short for NT Operating System Boot, which is used by the Windows operating system when the system is booting up. This prefetch file always carries the same trailing value, B00DFAAD (a play on the hex constant BAADF00D, traditionally used to mark uninitialized data). This is the largest of the prefetch files.

The application prefetch file is the most common and most familiar prefetch file, and it also produces the most forensic value. The naming convention for this prefetch file uses the name of the application that was executed and its extension (i.e. cmd.exe), followed by a thirty-two-bit hash represented in hexadecimal, with a “.pf” extension. The trailing hash value is the result of a calculation that uses the digits of PI (3.14159) as a randomizing seed, plus the number 37, in addition to the path from which the file was executed.3 This is what allows the same file to create two separate prefetch files when executed from two separate locations. It is also possible for two files executed from the same location on two different computer systems to have the same full prefetch file name.
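Building on the description above (and the reconstruction credited to Yogesh Khatri, reference 3), the XP/2003 hash is commonly reproduced as the sketch below. The exact constants (the multiplier 37 and the pi-derived 314159269) and steps are a community reconstruction, not documented Microsoft behavior, and Vista/Windows 7 use a different function:

```python
def xp_prefetch_hash(device_path):
    """Reconstruction of the published XP/2003 prefetch path hash.

    `device_path` is the device path the application was run from,
    e.g. r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\CMD.EXE".
    The constants 37 and 314159269 correspond to the "37" and
    "PI seed" described above; treat this as a sketch.
    """
    h = 0
    for ch in device_path.upper():
        h = (h * 37 + ord(ch)) & 0xFFFFFFFF
    h = (h * 314159269) & 0xFFFFFFFF
    if h & 0x80000000:            # interpret as signed; take absolute value
        h = 0x100000000 - h
    return h % 1000000007

def prefetch_name(device_path):
    """Compose the '<EXE NAME>-<HASH>.pf' file name from a device path."""
    exe = device_path.rsplit("\\", 1)[-1].upper()
    return "%s-%08X.pf" % (exe, xp_prefetch_hash(device_path))
```

Note how the same executable hashed from two different directories yields two different trailing values, which is exactly the behavior the article describes.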
The application hosting prefetch file calculates the trailing hash value a little differently than the application prefetch file. As previously referenced, the executed file’s name and extension are used in the first part of the prefetch file. The trailing hash value is calculated using the application’s path of execution and the command line used to start the application. This method was utilized to allow multiple application hosting files, such as DLLHOST.EXE, which are used to spawn many different processes that can coexist in the same prefetch folders under different names.
Prefetch files are considered data files. The construct of the prefetch file consists of two main sections: the file's metadata (the top part) and the NTFS/MFT file log (the bottom section of the file). The file's metadata contains the application or program's name, timestamps, and the number of times the file was executed. The timestamps recorded are the file's creation, modification, and last accessed times, all recorded in GMT. The number of times the application was executed is incremented by one each time the file is started. If the prefetch file is deleted, the run count starts over with the creation of a new prefetch file. This top portion of the file is not legible without a parsing tool. The second section, the NTFS/MFT file log, is written in ASCII and is legible, but still easier to read if parsed out. These entries are traces of the files and directories used by the application as it loads. This mapping of files will include system files, application-specific files, and files that are interpreted by the application that is started, for example the name of a document opened by Microsoft Word. The size of this section will vary for each prefetch file. Figure 1 shows the contents of a prefetch file as viewed with Guidance Software's EnCase4 forensic tool. There are several tools that can be used to parse prefetch files, and some of these tools will be discussed in the sections below.

Figure 1: Contents of a Prefetch File
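To illustrate the two-section layout, here is a minimal parser for the metadata header of a version-17 (XP/2003) prefetch file. The offsets used (executable name at 0x10, path hash at 0x4C, last run time at 0x78, run count at 0x90) follow commonly published layouts of this undocumented format; verify them against a known-good file before relying on the output:

```python
import struct
from datetime import datetime, timedelta, timezone

def parse_xp_prefetch(data):
    """Parse the metadata section of a version-17 (XP/2003) prefetch file."""
    version, signature = struct.unpack_from("<I4s", data, 0)
    if signature != b"SCCA":
        raise ValueError("missing SCCA signature")
    # Executable name: UTF-16LE, NUL-terminated, in a 60-byte field at 0x10.
    raw_name = data[0x10:0x10 + 60].decode("utf-16-le", errors="replace")
    name = raw_name.split("\x00", 1)[0]
    path_hash = struct.unpack_from("<I", data, 0x4C)[0]
    last_run_filetime, = struct.unpack_from("<Q", data, 0x78)
    run_count, = struct.unpack_from("<I", data, 0x90)
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC.
    last_run = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(
        microseconds=last_run_filetime // 10)
    return {"version": version, "name": name, "hash": path_hash,
            "last_run": last_run, "run_count": run_count}
```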

In addition to the cleanup, or file re-allocation, that the Task Scheduler performs on the files listed in the layout.ini file, the operating system also performs a cleanup process on the prefetch directory itself. The Windows XP operating system will only retain 127 prefetch files, while Windows 7 will retain 129. After the maximum number is met, no new prefetch files will be created. Sometime after thirty minutes of reaching the maximum number of files in the prefetch folder, the system will purge all but thirty-two of these prefetch files. Testing did not show consistent favoritism in the type of files retained versus purged, though Windows 7 seemed to retain application hosting files, while Windows XP retained only application prefetch files. Repeated testing also showed that on some occasions Windows XP retained only 126 files and at other times 129. Both Windows XP and 7 retained the NTOSBOOT prefetch file.
The Forensic Value of Prefetch Files
So what is the forensic value of the prefetch file? If you use Google to search for prefetch files, approximately the first fifty hits are websites telling users that they should delete the prefetch files to help speed up their computer. This information is obviously incorrect since the main purpose of the prefetch file is to speed up the loading of user applications. Without even intending to do so, prefetch files can sometimes answer the vital questions of computer forensic analysis: who, what, when, where, why, and sometimes even how.

The forensic value of the prefetch files will be examined from two different perspectives:
  1. The contents of the prefetch file
  2. The existence (or absence) of the prefetch file in the prefetch directory

The content of each prefetch file provides rich information about the applications that were executed. There are two main sections of the prefetch file. The top, or first section, of the prefetch file contains the metadata of the file. The metadata includes the file name, file location, associated timestamps (file created, last accessed, and file modified), and the number of times the file was executed. This information will be expanded on in the section below. The second, or bottom, section of the prefetch file includes a ten second snapshot of files that are associated with the executed file when it was first opened. This information will also be expanded on below.

Figure 2: Parsed Prefetch file using Prefetch_info.exe

Figure 2 shows a prefetch file after being parsed by the tool Prefetch_info.exe.5 With the use of a parser, the data can be easily interpreted. In this example the executed file was cmd.exe, which created the prefetch file shown. The associated timestamps are all listed in UTC. Figure 2 also shows that cmd.exe was executed fifteen times, and the location from which it was executed, \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\CMD.EXE, which equates to the \Windows\System32\ directory.
The forensic value of the contents of this file is immediately obvious. From the file metadata an examiner can identify that cmd.exe was executed, its location, and its frequency of use. These artifacts might answer the “what” and the “where” of an incident. The number of times executed increments each time the application is run. The timestamp information indicates when the application was first executed and when it was last executed. This might answer “when” some activity of interest occurred. Note that a file configured to “autostart” automatically will not register a prefetch file when it runs. If the prefetch file is deleted from the prefetch folder, both the timestamps and the run count are reset.
The second half of the prefetch file is written in plain text, but it can be challenging to read. Tools such as BinText6 or Prefetch_info.exe can organize the content, making it easier to read and to identify artifacts of interest.
Browsing the source locations from which applications were executed can reveal hidden or obfuscated directories. As highlighted below in Figure 3, the prefetch file for excel.exe shows the file one.xls located in a TrueCrypt volume. Since TrueCrypt has the ability to hide directories from view, finding the path listed in a prefetch file can provide a data source that might not otherwise be identified. By just browsing the contents of prefetch files it is possible to identify an obfuscated directory, such as C:\WINDOWS\System32\WiQZC\hidden\hacking\tools\nc.exe. Often, hackers will hide tools in plain sight in unusual directories under the System32 folder, a directory that contains many programs used by the operating system and that most users never browse.

Figure 3: Identifying Hidden or Obfuscated locations

The full directory path in the prefetch file can also reveal user accounts listed under the Documents and Settings (Windows XP) or Users (Vista/Windows 7) folder. This could expose a temporary account used for malicious activity by showing programs that were executed sometime in the past by a potential unauthorized user. This may answer the “who” question for a forensic exam, or at least narrow the scope. Figure 4 shows file activity from the user account “adnin”. This account may be malicious and trying to disguise itself as the legitimate account “admin”. Analyzing the full paths in the prefetch files can also show that an application or file was accessed from an external storage device. The external storage device entries will differ from those of a hard drive, showing an entry such as \DEVICE\HARDDISK\DP(1)0-0+D\ instead of \DEVICE\HARDDISKVOLUME1\. As long as the external device in question was not subsequently inserted into the computer, overwriting the last access time, the last access time in the prefetch file can be correlated with the timestamps in the USBStor registry key. Once identified via matching timestamps, the USBStor registry key entry will contain the serial number of the device in question. This can broaden the scope of forensic analysis to other devices that need to be seized and analyzed. Identifying unaccounted-for USB storage devices, and the applications or files accessed on those devices, might help in answering the “what” and “why” questions.
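A trivial helper can triage trace paths by device prefix. The prefixes below are taken from the examples in the text; other forms exist on real systems, so treat any "unknown" result as a prompt for manual review rather than a conclusion:

```python
def classify_device_path(path):
    """Rough classification of a prefetch trace path's source device.

    Prefix forms vary by Windows version and device; these two are the
    examples discussed in the article.
    """
    p = path.upper()
    if p.startswith("\\DEVICE\\HARDDISKVOLUME"):
        return "fixed disk"
    if p.startswith("\\DEVICE\\HARDDISK\\DP("):
        return "removable device"
    return "unknown"
```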

Figure 4: Identifying abnormal accounts

Prefetch files can also reveal whether file “time stomping” might have occurred. When hackers compromise a system and alter the timestamps of an application or tool, they might not be aware of what information is captured in a prefetch file. For instance if the Standard Information Attribute (SIA) and File Name Attribute (FNA) timestamps are modified in the Master File Table (MFT) to impede analysis, the entries in the prefetch files for those applications that were executed will reveal the actual timestamp when the application was first and last executed, completely circumventing the “time stomping” efforts. Just the existence of a prefetch file for the tool used to perform the time stamp manipulation would reveal nefarious activity.
  2. Help file from PFDump V2.2 – Enpack created by Dominik Weber
  3. - by Yogesh Khatri
  5. by Mark McKinnon

Part 2 of this article will demonstrate what the existence of the prefetch file itself can tell you. Examining the contents of the prefetch directory can provide a storyline of activity on a computer system, because a prefetch file captures the activity of applications that were first or subsequently executed. By using a tool such as Guidance Software's EnCase or NirSoft's WinPrefetch View, you can extract the prefetch files and view each file's creation or last access timestamp. First and foremost, the existence of a prefetch file shows that a certain application not only existed on the computer but has at one time been executed. By sorting the entries by file creation or last access time it is possible to see what applications were executed on the system and what activity might have occurred.

For instance, the entries in Figure 1 show that on April 9, 2010, two separate cmd.exe programs were executed. After the second cmd.exe was executed, the application CONSENT.exe was executed, which indicates the computer system is a Vista or Windows 7 system. The consent.exe program is the popup window presented to the user when a program requires administrator access, such as the MMC.exe application, which was executed ten seconds after CONSENT.exe. The presence of the prefetch files indicates that on April 9, 2010, at 1:16 PM, two instances of CMD.exe were executed from different locations, followed by the execution of the program MMC.exe. This event spawned the execution of CONSENT.exe (whose prefetch entry appears before MMC.exe's even though, chronologically, MMC.exe was executed first). The MMC program is the Microsoft Management Console, used to manage user accounts, Windows Event logs, disk management, and other administrative tasks. Figure 1 also shows that the application PSEXEC.exe was executed, a command-line tool that allows a user to execute commands remotely on a computer system.


Figure 1: Analyzing the Prefetch Folder

So what can prefetch files tell you? The existence of two prefetch files with the same application prefix and different trailing hashes is indicative of the same file name (e.g. CMD.exe) being executed from two different locations. The eight-character hash in the prefetch file's name is based on the location from which the application was executed. In this example, a rogue CMD.exe was executed from a different location than Windows\System32. This scenario can also detect a possible malware infection in which the malware was executed in one location, say the desktop or a temp directory, then removed itself from the original location, placed a copy in Windows\System32, and re-executed itself from the new location. This would cause the creation of two instances of the same prefetch file prefix with two different eight-character trailing hashes. If during a forensic exam there are two prefetch files with different trailing hashes, and the examiner needs to determine the location the file was executed from, the examiner can reverse engineer the location through trial and error. There is no magic algorithm that will let you plug in a formula and reproduce the path from which the application was executed. However, since the eight-character hash was created from an algorithm using the executed file's location, you can take any file, rename it to the prefix of the prefetch file (i.e. calc.exe), and place it in different suspected directories. Then execute the file and monitor the prefetch directory until the trailing hash matches. This process is very time consuming, so it is wise to focus on suspect directories.
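If the published reconstruction of the XP/2003 hash algorithm holds (see the naming discussion in part 1), the trial-and-error search can be done offline instead of by repeatedly executing a renamed file: hash each candidate directory and compare against the observed trailing value. This is a sketch under that assumption, not a substitute for verifying on a test machine:

```python
def find_source_directory(exe_name, observed_hash, candidate_dirs):
    """Offline version of the trial-and-error search described above.

    Computes the commonly published XP path-hash reconstruction for each
    candidate directory and returns the first one whose hash matches the
    observed trailing value, or None.
    """
    def xp_hash(path):
        # Same community reconstruction of the XP/2003 prefetch hash.
        h = 0
        for ch in path.upper():
            h = (h * 37 + ord(ch)) & 0xFFFFFFFF
        h = (h * 314159269) & 0xFFFFFFFF
        if h & 0x80000000:
            h = 0x100000000 - h
        return h % 1000000007

    for directory in candidate_dirs:
        full_path = directory.rstrip("\\") + "\\" + exe_name
        if xp_hash(full_path) == observed_hash:
            return directory
    return None
```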

The number and type of prefetch files in the prefetch directory can also reveal information about the individual who is using the computer system. The operating system will reduce the number of prefetch files once a certain number is met. The number of prefetch files can reveal a few different items.
  1. The system is relatively new and only a few different applications have been executed on the system. This situation is typical of a normal home user. They may only use about ten to fifteen programs over time.
  2. The system has been used extensively, and either over a short or long period of time the user(s) have executed many different programs. The timestamps and number of times the application was executed will provide background information on the duration and frequency these applications have been used.
  3. The type of applications that have been executed can also help in profiling the user’s technical capabilities. For instance, by identifying the type of programs the individual executes, the analyst can determine if the user is highly technical (for example if there are prefetch files for programming tools such as Python and Perl or technical programs such as IdaPro and VMWare.) The presence of hacker tools, such as nmap, Metasploit, or netcat could easily reveal the nature and intent of a computer user. On the other hand if the user is only using Internet web browsers, mail clients, and social networking software (i.e. Yahoo, Microsoft’s Instant Messenger) then you get a better profile of the type of computer user.

Here are some more practical forensic examples of how the prefetch file can be used to aid a forensic exam:
  • A simple scenario is where network logs show that system PC-A was scanning system PC-B with a tool such as Nessus. When the local administrators asked the user of PC-A about the activity he denied the allegations and even said that they could search his system for the tool Nessus if they wanted to. The seemingly savvy user had not only removed the Nessus tool after its use but also used a tool such as BCWipe to overwrite all unallocated space. What the user of system PC-A didn’t realize is that when he executed Nessus a prefetch file was created capturing the first time and last time the file was executed, the number of times it was run, and the location from which it was executed. These timestamps should correlate with the network logs and any activity recorded on system PC-B. The other valuable artifact is the prefetch file for the wiping tool BCWipe. The same type of incriminating information is contained in the BCWipe prefetch file.
  • From a forensic standpoint a prefetch file can be used to show that an employee who denied obtaining a salary spreadsheet actually did open a Microsoft Excel file named ABCorp_2010_Salaries.xls on their computer, which was located on an external thumb drive. For this to occur the employee would have to have opened the file by double clicking on the spreadsheet to open the file.

While there are many different tools that can be used to analyze prefetch files, three of the most useful tools to date are Prefetch_info.exe3 (and Prefetch_parse_gui.exe) by Mark McKinnon, WinPrefetch View by NirSoft, and the EnCase EnScript PFDump4 (V2.2) created by Dominik Weber.

Prefetch_info.exe is a Windows command line tool that neatly parses out both the file’s metadata (time stamps), and the NTFS/MFT file log. Prefetch_info.exe can only be run on one prefetch file at a time. This tool can quickly return results on a prefetch file of interest.

The second tool by Mark McKinnon, Prefetch_parse_gui.exe is a graphical based tool that analyzes a whole directory of prefetch files. NirSoft’s WinPrefetch View is modularized with the top section listing each prefetch file along with all its associated metadata. The bottom section displays the NTFS/MFT log data for the prefetch entry that is selected in the top section. Figure 2 shows the interface for WinPrefetch View. By default this tool will read the prefetch files of the local computer system. The Advanced Options entry under the Options tab allows you to select another location where prefetch files might have been extracted out of an image.
The metadata shown below can be sorted by columns and any results of interest can be exported to HTML reports.


Figure 2: WinPrefetch View

The most extensive analytical prefetch tool seen so far is Dominik Weber’s PFDump EnScript. The EnScript will identify all the prefetch files on the loaded hard drive and identify if the prefetch file is a hosting application prefetch file or a regular application prefetch file. If no entries are selected all of the files with the “.pf” extension will be processed. There are two options on the main page, Toggle MFT processing for selected files, and Toggle hash verification for selected files. The Toggle MFT processing for selected files allows the option to extract and process any Master File Table record information that is located within the prefetch file.

EnCase’s Console will provide a status of the EnScript’s operation, while the prefetch artifacts for the selected files are placed in EnCase’s Bookmark section. Figure 3 shows the options available when analyzing identified application hosting prefetch files, and the output of an identified command line used to start compmgmt.msc. When working with application hosting files, by default PFDump will try many different standard command line options that the hosting application might have used to execute the process. Identifying how a process of interest was started and the options used might prove useful during forensic analysis. There is also an entry to insert a suspected command line option that might have been used to start a process. This can be used to verify a command line option that might have been discovered in unallocated space.

Figure 3: PFDump EnScript Hosting Application Entries and output

If the prefetch files have been purposefully or systematically deleted through routine maintenance, there is still a chance to recover prefetch files of interest. Common sense in computer forensics states that any deleted file can be recovered as long as it has not been overwritten. The same rule holds true for prefetch files. A common method to search for and extract files is to search for a file's header. Since every file type has a distinctive header, we can search through unallocated space looking for the prefetch file header. That header in ASCII is “….SCCA”; in hexadecimal it is represented as “11 00 00 00 53 43 43 41” (the leading version value is 11 for XP/2003 and 17 for Vista and Windows 7). Once the file has been identified, it can be carved out and analyzed with one of the aforementioned tools. Since prefetch files do not have file footers, it is fine if extra data is carved out when extracting a potential prefetch file; any excess data will be easily recognized and discarded.
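A simple carving pre-pass can be sketched as follows. It flags "SCCA" signatures preceded by a known 4-byte version value (0x11 for XP/2003, 0x17 for Vista/7); streaming a large image through this in chunks, and deciding how many bytes to carve after each hit, is left to the caller:

```python
import struct

# Known prefetch format version values preceding the 'SCCA' signature.
KNOWN_VERSIONS = {0x11, 0x17}  # 0x11 = XP/2003, 0x17 = Vista/Windows 7

def find_prefetch_headers(data):
    """Return offsets of plausible prefetch headers in a raw byte buffer."""
    offsets = []
    pos = data.find(b"SCCA")
    while pos != -1:
        if pos >= 4:
            version = struct.unpack_from("<I", data, pos - 4)[0]
            if version in KNOWN_VERSIONS:
                offsets.append(pos - 4)  # header starts at the version field
        pos = data.find(b"SCCA", pos + 1)
    return offsets
```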

When analyzing prefetch files there are a few items to note. When certain applications are executed and remain in an “open state,” the prefetch file will not be created until the application is closed. For instance, if the application netcat was executed for the first time on June 14th at 13:00:00, but was not shut down until June 15th at 15:00:00, the prefetch file will not be created until the netcat application is closed, twenty-six hours after it was first executed. This delay in file creation can throw off timeline analysis. Also, programs located in a user's Startup directory will not create a prefetch file.

When performing an Internet search for prefetch files, many of the initial results tell users to remove the prefetch files to speed up their computers, so an absence of prefetch files is not necessarily a sign of anti-forensics. The lack of prefetch files may also be due to the registry value “EnablePrefetcher” having been modified to disable prefetching. Below is the registry key that controls what actions the operating system takes with regard to prefetching. By default Windows XP, Vista, and Windows 7 have a value of “3,” which enables both application and boot prefetching. On Windows 2003 systems the default value is “2,” which is why there is no application prefetching.

  • HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters (value: EnablePrefetcher)
    • Value: 0 = Prefetching is disabled
    • Value: 1 = Application prefetching is enabled
    • Value: 2 = Boot prefetching is enabled
    • Value: 3 = Both application and boot prefetching are enabled
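Since EnablePrefetcher is effectively a two-bit flag field (bit 0 controls application prefetching, bit 1 controls boot prefetching), a small helper, sketched here in Python, can translate a recovered value into those settings:

```python
def decode_enable_prefetcher(value: int) -> str:
    """Decode the EnablePrefetcher registry value (0-3).

    Bit 0 enables application prefetching; bit 1 enables boot prefetching.
    """
    app = bool(value & 0b01)   # application prefetching flag
    boot = bool(value & 0b10)  # boot prefetching flag
    if app and boot:
        return "application and boot prefetching enabled"
    if app:
        return "application prefetching enabled"
    if boot:
        return "boot prefetching enabled"
    return "prefetching disabled"
```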

The existence of prefetch files for the Windows defragmenting tools, DEFRAG.EXE and DFRGNTFS.EXE, also does not necessarily indicate that a user removed prefetch files or defragmented the computer to cover up malicious activity. The Windows operating system, specifically the Task Scheduler, starts the defrag process to reorganize the entries listed in the Layout.ini file. When this occurs, new prefetch files for DEFRAG.EXE and DFRGNTFS.EXE are created; if those prefetch files already existed, their run counts are incremented by one each time the process runs.

This article has described the many forensic artifacts that can be recovered through prefetch file analysis. Whether that analysis helps in an investigation depends on the type of forensic investigation being conducted.


Mark Wade is a Digital Forensic Analyst with Harris Corporation (Crucial Security Programs), performing digital forensics for a Federal Law Enforcement agency as a government contractor. Mark has been engaged in computer/network security for the past twelve years, with specific focus on penetration testing, IDS and firewall management, incident response, and malware analysis, and has spent the last three years conducting computer forensics. E-mail:


On the Reports Predicting the 2010 Security Threats (Security Threat Report: Trends for 2010)

As the year winds down, a new one is about to arrive. This is also the season when the annual security threat statistics and analyses are published, along with trend estimates and predictions for the coming year. I have collected a number of published threat reports here, most of them released about a year ago; personally, I prefer looking back at them in hindsight, since after a year of ups and downs they are far more thought-provoking in retrospect. Read a few even older annual reports and you may feel that doing security back then was easy, even though at the time it seemed just as dizzying.

Reasons to read the annual threat reports:
1. To find directions or priorities for security defense; for security vendors, grasping the trends means seizing the opportunities.
2. Annual threat reports are the product of vendors' investment in infrastructure, collection, statistics, and analysis; some are compiled from information gathered across the network. Writing a good report is not as easy as it sounds, and writing one that accurately calls future trends is genuinely hard, so the reports also show how much care a vendor puts in.
3. To validate your current defenses against your internal threat analyses or data.
4. "I'll fight ten of them."
5. To use selected topics as material for general security-awareness training. Many attacks target people, and it is people's mindset that needs correcting.

Reasons people dismiss them:
1. They carry embedded product placement.
2. Their coverage is neither comprehensive nor objective, and it is easy to be misled. This is true: when reading a threat report, note the publishing vendor's specialty, since the analysis stays within its own field.
3. With no internal threat analysis of your own, and no attention paid to the outside world, there is nothing to validate against.
4. No idea what the content is even talking about.
5. There is already more required work than can be finished; who has time for new problems?

Whatever the reason, here is a collection of 2010 reports and predictions gathered from around the Web. A quick look through them may spark some different defensive ideas or creative thinking:

McAfee Labs: 2010 Threat Predictions

  1. As their user bases keep growing, social networks such as Facebook will face more sophisticated security threats.
  2. The explosive growth of Facebook and other web application services makes them ideal targets for cybercriminals, whose attacks will exploit the trust between friends to get users to click links they would otherwise treat with caution.
  3. HTML5 will blur the line between desktop and web applications. This will also make operating systems such as Google Chrome OS a new hunting ground for malware authors preying on ordinary users.
  4. Email attachments have been a malware delivery vehicle for years, yet their use keeps growing, continually fooling enterprises, journalists, and home users into downloading Trojans and other malware.
  5. Because Microsoft products are so widespread, cybercriminals have long singled them out; in 2010, expect Adobe software to take the spotlight, particularly Acrobat Reader and Flash.
  6. Banking Trojans will get smarter, potentially intercepting legitimate transactions to make unauthorized withdrawals.
  7. Botnets are cybercriminals' advanced infrastructure, routinely used with stolen identities to send spam. Recent successful botnet takedowns have forced their operators to look for alternatives, including peer-to-peer setups; the security loopholes usable for command and control are dwindling.
  8. Although botnets span the globe, we predict 2010 will bring a measure of victory in the fight against cybercriminals.
Source: McAfee 2010 Threat Predictions @2009/12/xx

M86 Security™ releases its 2010 security threat report:

  1. Botnets will grow more complex
  2. Scareware (fake security software) will keep increasing
  3. Poisoned search engine results
  4. The evolution of website infections
  5. The positioning and prospects of SaaS and cloud services
  6. Exploitation of third-party applications
  7. Abuse of internationalized domain names
  8. Attacks on application programming interfaces
  9. URL-shortening services hiding malicious schemes
Source: M86 Security press release @2009/12/04


  1. Changes in Internet infrastructure will open up more opportunities for cybercriminals.
  2. Cybercriminals will use social media and social networks to infiltrate users' "circles of trust."
  3. Global malware pandemics will gradually die out; regional and targeted attacks will grow.
  4. Key predictions for 2010 and beyond:
  • Money is the motive behind everything; cybercrime is not going away.
  • Windows 7 will have an impact, because its default security level is slightly lower than Vista's.
  • Even non-mainstream browsers and operating systems cannot avoid the risk.
  • Malware will mutate every few hours.
  • Drive-by infection has become the norm; merely browsing a malicious site is enough to get infected.
  • New attacks will target virtualization and cloud computing environments.
  • Bots will always be with us and cannot be eradicated.
  • Data-breach incidents on corporate and social networks will continue.
Source: Network Information Magazine (網路資訊雜誌) @2009/12/16


  1. More attacks coming from file-sharing networks
  2. A large increase in epidemic malware spread via P2P networks
  3. Continued competition for traffic among cybercriminals
  4. A decline in fake antivirus scams
  5. Attacks targeting Google Wave
  6. More attacks on the iPhone and Android mobile platforms
"In 2010, the developers of targeted malware will grow more sophisticated and will need far greater resources to fight the antivirus companies. Application vulnerabilities will remain cybercriminals' primary target, and ultimately I believe real-time search engines, Black Hat Search Engine Optimization techniques, and social networks will become the focus of cybercrime."
Source: Kaspersky press release @2009/12/22


  1. More antivirus vendors will raise the banner of cloud protection; the cloud computing storm is about to hit.
  2. An avalanche of malware; the amount of malware in circulation will keep growing.
  3. Social traps: infecting users via search engines (for example, SEO poisoning) and drive-by downloads on social networking sites.
  4. Windows 7: the operating system is bound to draw a bigger wave of malicious attacks over the next two years.
  5. Mobile phones: if, in a few years, only two or three major phone platforms remain and people start using mobile payment services, that will be the time to talk about cybercrime on mobile platforms.
  6. Apple: expect more purpose-built malware attacking the Mac operating system in 2010.
  7. Cloud services: attacks on cloud infrastructure and services will become increasingly feasible.
  8. Cyberwar: politically motivated attacks that could affect economies or critical infrastructure.
Source: Panda press release @2009/12/23

SophosLabs Security Threat Report: 2010

  1. Social networking: social networks are unstoppable; prepare for their impact.
  2. Data loss and encryption: step one is encrypting data before it can be lost; step two is controlling how users handle it.
  3. Web threats: the Web remains the biggest malware distribution channel, with fake antivirus software and SEO-driven malware stirring up trouble.
  4. Email threats: malicious email is not dead; distribution via attachments and embedded links never stops.
  5. Spam: compromised zombie machines are still the biggest source of spam, with IM and social networks next. Watch out for comments on forums and blogs.
  6. Malware trends: a huge underground economy; Adobe Reader (PDF) has become a prime target; the notorious Conficker worm persists.
  7. Windows 7: provides a more secure environment, but there is still room for improvement.
  8. Apple Macs: surveys find 69% of Mac users have no antivirus software installed, while cross-platform PDF exploits abound.
  9. Mobile devices: smartphones raise new malware concerns.
  10. Cybercrime: after a decade it has grown into a huge underground economy in which personal and credit card data are hot black-market commodities.
  11. Cyberwar and cyberterror: critical civil infrastructure (water, power, and so on) could be remotely attacked, controlled, or sabotaged over the network; a major concern for the future.
Source: SophosLabs @2010/1/xx


  1. Botnets and malware will keep increasing
  2. Social-network security threats will climb sharply
  3. Third-party games and applications are ticking time bombs
  4. URL-shortening services will become hackers' favorite tool
  5. The proportion of spam in Chinese and other East Asian languages will rise
  6. Cloud computing security issues
Source: Cellopoint press release @2010/1/4


  1. Attacks built on Web 2.0 will become more mature and widespread.
  2. Botnets will run rampant and fight one another over territory.
  3. Spoofed email "from someone you know" will again become hackers' medium of choice because of its high success rate.
  4. Microsoft-targeted attacks are currently expected to focus on Windows 7 and IE 8.
  5. Click search results with care.
  6. Hackers pay for advertising too: fake ads hide real danger, so beware.
  7. Can the Mac operating system keep facing threats and emerging unscathed, as before? 2010 will prove the answer is "no."
Source: Websense Security Labs press room @2010/1/11


  1. Antivirus alone is no longer enough
  2. Social engineering will become the primary attack vector
  3. Rogue security software vendors will escalate their efforts
  4. Third-party social networking applications will become fraud targets
  5. Windows 7 will be in attackers' crosshairs
  6. Fast-flux botnets will increase
  7. URL-shortening services will become phishers' accomplices
  8. Mac and mobile malware will increase
  9. Spammers will break their own rules
  10. Spam volumes will keep fluctuating as spammers adapt to the environment
  11. Specialized malware
  12. CAPTCHA technology will improve
  13. Instant messaging spam
  14. Non-English spam volumes will keep rising
On the storage side, Symantec also predicted:
  1. 2010 is the "year of deletion"
  2. 2010 is the year to stop accumulating backup tapes for long-term retention
  3. Deduplication everywhere
  4. Industry competition will drive the development of standards-based software
  5. A year of migration
  6. Virtualization will move beyond x86
  7. Cloud storage catches up
  8. Cloud storage drives data management
  9. Organizations can no longer put off the "green" movement
Source: Symantec press release @2010/2/25

CSA (Cloud Security Alliance):Top Threats to Cloud Computing V1.0

  1. Abuse and Nefarious Use of Cloud Computing
  2. Insecure Interfaces and APIs
  3. Malicious Insiders
  4. Shared Technology Issues
  5. Data Loss or Leakage
  6. Account or Service Hijacking
  7. Unknown Risk Profile: users cannot learn important details such as the cloud's network architecture, system architecture, or software versions, and so cannot perform a security assessment.
Source: Top Threats to Cloud Computing @2010/3/xx


  1. With the rise in zero-day vulnerabilities, drive-by downloads from compromised websites remain hackers' main intrusion method; expect the number of compromised sites to grow sharply in the second half of 2010.
  2. Third-party software vulnerabilities will become hackers' main avenue of exploitation; the third-party flaws that have surfaced so frequently in recent years have been widely exploited, with serious impact.
  3. Wireless attacks will become a new target for hackers; the wireless-cracking techniques that already exist let attackers freeload on networks, and wireless attacks will pose an ever greater threat to users.
  4. Viruses will become increasingly stubborn. The first half of 2010 saw a flood of malicious IE-icon viruses; because these live in registry values, traditional antivirus software cannot remove the malicious IE icon even after cleaning the virus itself.
  5. Viruses that modify the master boot record will increase; this technique loads the virus before the operating system starts, making it hard for antivirus software to detect and remove.
In response, the lab expects antivirus vendors to:
  1. Adjust their cloud-computing algorithms to the overall evolution of malware, detecting viruses that spread in small pockets more quickly and making cloud scanning more effective.
  2. Strengthen protection of both their own products and users' systems, for example by preventing modification of the master boot record, to weaken malware's ability to seize control of a system.
  3. Improve their overall remediation capability, removing infections as completely as possible and restoring systems to a normal working state.
Source: Antiy Labs virus advisory @2010/7/6

IBM X-Force 2010 Mid-Year Trend and Risk Report:

  1. Attackers increasingly use JavaScript obfuscation and other masking techniques, a headache for IT security professionals. Obfuscation is a technique used by software developers and attackers alike to hide or mask application code.
  2. Vulnerability disclosures, never in short supply, rose 36%. In 2010 we saw a flood of disclosures, driven by a sharp increase in public exploit releases and by several large software companies working to identify and mitigate vulnerabilities.
  3. PDF attacks keep rising as attackers find new ways to deceive users. To understand why PDFs are targeted, consider that endpoints are usually the weakest link in an organization, and attackers know it. An endpoint may hold no confidential data itself yet have access to endpoints that do, or it may serve as a pivot point for launching attacks on other machines.
  4. The Zeus botnet toolkit continues to ravage organizations. An updated Zeus 2.0 was released in early 2010; its main new features give attackers upgraded capabilities.
Source: IBM X-Force Threat Report @2010/8/xx


  • The underground economy (botnets, black-market trading, and so on) is well established; face up to it.
  • Motivated attacks (with political, economic, commercial, or data-theft objectives) are the truly difficult ones.
  • Every new technology (new operating systems, smartphones, the cloud, and so on) is a new battlefield, and both black hats and white hats want to plant their flags there.
  • Email and the Web have always been the strongest channels for spreading malware, though both now show signs of transformation (Facebook's possible next-generation email; the Web's shift toward apps and the cloud).
  • Learning the quirks of each browser and cultivating sound security awareness, bit by bit building a safe browsing environment, will be a long-term task for Internet users.
  • Protecting sensitive corporate data is likewise a long-term, ongoing task.
  • Vulnerabilities will always exist. Finding them before they are exploited, blocking or detecting exploitation as it happens, and recognizing, responding, and recovering afterward: there is no turnkey solution, only "face it, accept it, deal with it, let it go."
  • Social networks make attacks more personal and approachable, enabling targeted, finely crafted campaigns. If users cannot manage their own privacy on social networks, how can we expect them to safeguard sensitive corporate data?

Reposted from 資安之我見 (My Take on Information Security)

Fuzzy Hashing


Getting Started with ssdeep

This document provides an introduction to using ssdeep. The current version of this document can be found on the ssdeep web site.
This guide starts with an explanation of the basic functions of ssdeep and then gives some examples of using fuzzy hashing in real world situations.

Installing ssdeep

Microsoft Windows

Users running Microsoft Windows are strongly encouraged to download the precompiled binaries. Please note that these binaries are created using a MinGW cross compiler. Compiling the programs directly from Windows is not supported.

Automatic Installation

Before you try to install ssdeep manually, see if your operating system supports the programs via an automatic installation method. Some operating systems that provide this feature for ssdeep are:
Linux: Ubuntu, Debian

Manual installation

If your operating system does not support the automatic installation methods described above, you will have to download the source code and compile the programs yourself. First download the latest tarball of the program. This file should be named something like ssdeep-2.4.tar.gz. Uncompress the file with the following command:
$ tar zxvf ssdeep-2.4.tar.gz
Change into the decompressed directory
$ cd ssdeep-2.4
and configure the program:
$ ./configure
The configure script can accept lots of options. Run ./configure --help for the complete list. The most common option is --prefix, which installs the program in a location other than the default, /usr/local/bin. To install the program elsewhere, for example in /tmp/ssdeep, you would run ./configure --prefix=/tmp/ssdeep instead.
You can now compile the program using the make command:
$ make
and install it:
$ make install
Note that you must be root on most operating systems to install the program to its default location, /usr/local/bin. The sudo tool may help:
$ sudo make install

Basic Operation

By default, ssdeep generates context triggered piecewise hashes, or fuzzy hashes, for each input file. The output is preceded by a file header. C:\temp> ssdeep config.h INSTALL doc\README
Notice how the above output shows the full path in the filename. You can have ssdeep print relative filenames instead of absolute ones. That is, omit all of the path information except that specified on the command line. To enable relative paths, use the -l flag. Repeating our first example with the -l flag: C:\temp> ssdeep -l config.h INSTALL doc\README
You can have ssdeep only print out the basename of each file it processes. That is, all directory information will be stripped off. To enable basename mode, use the -b flag: C:\temp> ssdeep -b config.h INSTALL \doc\README

Error messages

If no input files are specified, an error is displayed. C:\temp> ssdeep
ssdeep: No input files
Although some programs process standard input and thus allow you to pipe the output of other programs to them, ssdeep does not support this functionality. If an input file can't be found, an error message is normally printed. These, and all other error messages, can be suppressed by using the -s flag. C:\temp> ssdeep doesnotexist.txt
ssdeep: C:\temp\doesnotexist.txt: No such file or directory
C:\temp> ssdeep -s doesnotexist.txt

Recursive Mode

Normally, attempting to process a directory will generate an error message. Under recursive mode, ssdeep will hash files in the current directory and files in subdirectories. Recursive mode is activated by using the -r flag. C:\temp> ssdeep *
ssdeep: C:\temp\backups Is a directory
ssdeep: C:\temp\www Is a directory

C:\temp> ssdeep -r *

Matching mode

One of the more powerful features of ssdeep is the ability to match the hashes of input files against a list of known hashes. Because of the inexact nature of fuzzy hashing, note that just because ssdeep indicates two files match, it does not mean those files are related. You should examine every pair of matching files individually to see how well they correspond.
Here's a simple example of how ssdeep can match files that are not identical. We take an existing file, make a copy of it, and append a single character to it.
$ ls -l foo.txt
-rw-r--r-- 1 jessekor jessekor 240 Oct 25 08:01 foo.txt

$ cp foo.txt bar.txt
$ echo 1 >> bar.txt

A cryptographic hashing algorithm like MD5 can't be used to match these files; they have wildly different hashes.
$ md5deep foo.txt bar.txt
7b3e9e08ecc391f2da684dd784c5af7c /Users/jessekornblum/foo.txt
32436c952f0f4c53bea1dc955a081de4 /Users/jessekornblum/bar.txt

But fuzzy hashing can! We compute the fuzzy hash of one file and use the matching mode to match the other one.
$ ssdeep -b foo.txt > hashes.txt
$ ssdeep -bm hashes.txt bar.txt
bar.txt matches foo.txt (64)
The number at the end of the line is a match score, or a weighted measure of how similar these files are. The higher the number, the more similar the files.
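To see why the scores behave this way, here is a toy Python illustration of the context-triggered piecewise idea behind ssdeep. It is not ssdeep's real algorithm: the trigger condition, window size, per-chunk MD5, and set-based score are all simplifications for illustration. The key property is that chunk boundaries are chosen by a rolling condition on the content itself, so appending a byte only disturbs the final chunk:

```python
import hashlib

def piecewise_signature(data: bytes, window: int = 7, modulus: int = 16):
    """Toy context-triggered piecewise hash: cut a chunk whenever the sum
    of the last `window` bytes is 0 mod `modulus`, then hash each chunk."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if sum(data[i - window:i]) % modulus == 0:   # content-driven trigger
            chunks.append(hashlib.md5(data[start:i]).hexdigest()[:4])
            start = i
    chunks.append(hashlib.md5(data[start:]).hexdigest()[:4])  # final chunk
    return chunks

def similarity(sig_a, sig_b):
    """Crude 0-100 score: fraction of distinct chunk hashes shared."""
    a, b = set(sig_a), set(sig_b)
    if not a or not b:
        return 0
    return round(100 * len(a & b) / max(len(a), len(b)))
```

Because the triggers depend only on local windows of content, two files that share a long prefix produce signatures that agree on every chunk before the point of divergence, which is why a single appended byte still yields a high score.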

Source Code Reuse

As a more practical example of ssdeep's matching functionality, you can use ssdeep's matching mode to help find source code reuse. Let's say we have two folders, ssdeep-1.1 and md5deep-1.12 that contain the source code for each of those tools. You can compare their contents by computing fuzzy hashes for one tree and then comparing them against the other: C:\> ssdeep -lr md5deep-1.12 > md5deep-hashes.txt

C:\>ssdeep -lrm md5deep-hashes.txt ssdeep-1.1
ssdeep-1.1\cycles.c matches md5deep-1.12\cycles.c (94)
ssdeep-1.1\dig.c matches md5deep-1.12\dig.c (35)
ssdeep-1.1\helpers.c matches md5deep-1.12\helpers.c (57)
Ta da! You can see that I reused code from the md5deep project when writing ssdeep.

Truncated Files

Along with source code reuse, you can also use fuzzy hashing to find truncated files. Here's a sample using a fake filename. We'll compute the fuzzy hash for the file, make a copy that contains only the first 29% of the original, and then try to match the truncated version back to the original.
$ ls -lsh
-rwxr-xr-x 1 jvalenti users 699M Sep 29 2006 all-the-kings-men.avi

$ ssdeep -b all-the-kings-men.avi > sig.txt

$ cat sig.txt

$ dd if=all-the-kings-men.avi of=partial.avi bs=1m count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 14.510224 secs (14452926 bytes/sec)

$ ls -lsh partial.avi
-rw-r--r-- 1 jvalenti users 200M Oct 6 06:40 partial.avi

$ ssdeep -bm sig.txt partial.avi
partial.avi matches all-the-kings-men.avi (57)

Needles in a Haystack

You can also compare many files against each other, without writing any hashes to disk, using two different methods. Let's say we have a whole bunch of files in two or three directories and want to know which ones are similar to each other. We can use the -d mode to display these matches. The switch causes ssdeep to compute a fuzzy hash for each input file and compare it against all of the other input files.
In this example, we've gathered a whole bunch of Microsoft Word documents in the folders Incoming, Outgoing, and Trash. Rather than go through all of the documents, it would be nice to eliminate those that are substantially the same.
C:\temp> ssdeep -lrd Incoming Outgoing Trash
Incoming\Budget 2007.doc matches Outgoing\Corporate Espionage\Our Budget.doc (99)
Incoming\Salaries.doc matches Outgoing\Personnel Mayhem\Your Buddy Makes More Than You.doc (45)
Outgoing\Plan for Hostile Takeover.doc matches Trash\DO NOT DISTRIBUTE.doc (88)
Oh my!
The -p mode works similarly, but displays the results in a slightly nicer format. If two input files A and B match, the -d mode will only display "A matches B." The -p mode will display "A matches B," skip a line, and then display "B matches A." This greatly increases the length of the output but can make matching files easier to find. Here's the above input again, this time using the -p flag.
C:\temp> ssdeep -lrp Incoming Outgoing Trash
Incoming\Budget 2007.doc matches Outgoing\Corporate Espionage\Our Budget.doc (99)

Incoming\Salaries.doc matches Outgoing\Personnel Mayhem\Your Buddy Makes More Than You.doc (45)

Outgoing\Corporate Espionage\Our Budget.doc matches Incoming\Budget 2007.doc (99)

Outgoing\Personnel Mayhem\Your Buddy Makes More Than You.doc matches Incoming\Salaries.doc (45)

Outgoing\Plan for Hostile Takeover.doc matches Trash\DO NOT DISTRIBUTE.doc (88)

Trash\DO NOT DISTRIBUTE.doc matches Outgoing\Plan for Hostile Takeover.doc (88)

Comparing Files of Signatures

After you've generated several files of fuzzy hashes you may wish to compare those signatures to each other. You can compare one or more files of signatures against each other using the -x flag.
$ ssdeep -r /etc > list1.txt
$ ssdeep -r /usr > list2.txt
$ ssdeep -lr ./known_malware > list3.txt
$ ssdeep -x list1.txt list2.txt list3.txt

list1:/etc/rcc.d/init.d matches list3:./known_malware/wlk_rootkit/dropper (86)

list3:./known_malware/wlk_rootkit/dropper matches list1:/etc/rcc.d/init.d (86)

The above method compares all of the signatures against each other. This can take some time, especially if the files are large. If you'd rather compare some unknown signatures against a set of known signatures, you can use the -k flag. Let's say you have signatures for malicious programs in badfiles.txt and worsefiles.txt. You then compute the fuzzy hashes for programs on some workstations and save them to comp1.txt, comp2.txt, and comp3.txt. You can compare these unknowns to the knowns like this:
C:\> ssdeep -k badfiles.txt -k worsefiles.txt comp1.txt comp2.txt comp3.txt

comp1.txt:WINWORD2.EXE matches badfiles.txt:some_trojan.exe (84)

comp3.txt:ntoskrrnl.exe matches worsefiles.txt:delete_all_data.exe (77)

Flash cookies: new threats to Internet Privacy

Gathering information through HTTP cookies is no longer welcomed by users, who have found ways to avoid them. According to a Bruce Schneier post, site developers now have a better option. It is still considered a cookie, but it is a different kind.

LSO, a bigger and better cookie

Like an HTTP cookie, a local shared object (LSO), also called a Flash cookie, stores information about us and tracks our activity on the Internet. Here is what I have learned about them:

· A Flash cookie can hold up to 100 kilobytes of data; a standard HTTP cookie holds only 4 kilobytes.

· Flash cookies have no default expiration time.

· Flash cookies are stored in different locations than HTTP cookies, making them difficult to find.

YouTube's Test

LSOs are also very difficult to remove. Here is an example: visit the YouTube site, open a video, and adjust the volume. Delete all cookies and close the web browser. Re-open the browser and play the same video. Notice that the volume does not revert to its default setting. This proves that the Flash cookie was not removed and is still in effect.

Very few people know that Flash cookies exist, and that is the problem. It gives users a false sense of security that the web browser's cookie controls govern what sites can store. As shown above, browser privacy controls do not affect Flash cookies.

Where they are stored

Flash cookies use the .sol extension. But even knowing this, I could not find any sign of them on my computer. Thanks to Google (which uses Flash cookies itself), I found that the only way to see information about Flash cookies is through the Flash Player website.

The following screenshot from the Flash Player site shows the storage settings panel. It lists the sites visited (200 in total) and where all of their Flash cookies are stored. If you want to delete them, this panel is also where that is done.
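If you would rather hunt for the .sol files directly on disk, a short Python sketch can search the usual Flash Player storage directories. The candidate roots below are the commonly cited per-OS locations and are an assumption; they may vary by operating system and Flash version:

```python
import os
from pathlib import Path

# Typical Flash Player LSO locations (assumed; may vary by OS/version):
CANDIDATE_ROOTS = [
    Path.home() / ".macromedia" / "Flash_Player",                         # Linux
    Path.home() / "Library" / "Preferences" / "Macromedia",               # macOS
    Path(os.environ.get("APPDATA", "")) / "Macromedia" / "Flash Player",  # Windows
]

def find_flash_cookies(roots=CANDIDATE_ROOTS):
    """Return every *.sol file found under the given root directories."""
    hits = []
    for root in roots:
        if root.is_dir():                 # skip roots that don't exist here
            hits.extend(root.rglob("*.sol"))
    return hits
```

Pointing find_flash_cookies() at any directory list works the same way, so it can also be run against a mounted forensic image rather than the live system.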

Flash cookies are very popular

Another Google search result brought me to a University of California, Berkeley study. The researchers surveyed the top 100 websites for their use of Flash cookies and their privacy practices. The results showed:

* 54 of the 100 sites used Flash cookies.

* Those 54 sites stored 157 Flash cookie (LSO) files containing 281 individual Flash cookies.

* 98 of the 100 sites used HTTP cookies, setting a total of 3,602 HTTP cookies.

* 31 of the sites displayed the TRUSTe privacy program logo; 14 of those 31 used Flash cookies.

* Only four of the 100 sites mentioned Flash as a tracking mechanism.

It appears that many sites use both HTTP and Flash cookies, which puzzled the researchers. After extensive analysis, they found the reason: respawning.

Flash cookie respawning

The University of California, Berkeley researchers found that HTTP cookies deleted when you close the browser can be rewritten (respawned) from information stored in Flash cookies:

"We found several instances of sites rewriting HTTP cookies. On one site, a SpecificClick Flash cookie rewrote a deleted SpecificClick HTTP cookie. The same thing happened on another site, where a QuantCast Flash cookie rewrote a deleted QuantCast HTTP cookie."

The researchers also found that Flash cookies can restore HTTP cookies across domains, not just within the same domain:

"We also found that HTTP cookies could be rewritten across domains. For example, a third-party ClearSpring Flash cookie could rewrite matching HTTP cookies set by other domains."
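The respawning logic the researchers describe boils down to this: if an HTTP cookie is missing but its matching Flash cookie survives, the tracking ID is simply written back. This toy Python model, with plain dicts standing in for the two cookie stores and made-up tracker names, is only an illustration of that logic, not real tracker code:

```python
def respawn(http_cookies: dict, flash_cookies: dict) -> dict:
    """Toy model of LSO respawning: any tracking ID present in the Flash
    cookie store but missing from the HTTP cookie store is written back."""
    for tracker, uid in flash_cookies.items():
        if tracker not in http_cookies:
            http_cookies[tracker] = uid  # the "deleted" HTTP cookie reappears
    return http_cookies
```

The point of the model is that deleting only the browser's cookie store accomplishes nothing as long as the Flash store survives, which is exactly what the Berkeley tests observed.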

They are becoming more powerful

Not long ago I wrote an article about companies stating that they would not use behavioral targeting (BT) technology. In it, I mentioned the Network Advertising Initiative (NAI), an online advertising association of roughly 30 companies that use BT technology. Under pressure, the association created an opt-out page that makes avoiding tracking very simple.

The researchers found that the opt-out cookie settings are incomplete: NAI member sites still created Flash cookies. The report mentions a specific case:

"We found that even with the NAI opt-out cookie set for QuantCast, its Flash cookies were still in use. After cookies were deleted, the QuantCast Flash cookie even rewrote the QuantCast HTML cookie; it did not rewrite the opt-out cookie itself. Users who opted out were therefore still being tracked."

Some solutions

To keep Flash cookies from being saved, open the Settings Manager's Global Storage Settings panel and, as shown below, uncheck the option to allow third-party Flash content to store data on your computer.

This prevents Flash cookies from being installed on the system. The irony is that this must be done through the Flash settings site itself.

In their tests, the researchers used the Mozilla Firefox browser. In the report they mention BetterPrivacy, a Firefox add-on that can delete all Flash cookies when the browser closes. Another add-on, Ghostery, can detect hidden tracking scripts on web pages and issue a warning.


I think the days of silently tracking Internet users have passed. If this technology really is harmless, then make it opt-in.

Reposted from Softcov