Saturday, September 05, 2015

Registry Analysis

I gave a presentation on Registry analysis at the recent HTCIA2015 Conference, and I thought that there were some things from the presentation that might be worth sharing.

What is Registry analysis?  
For the purposes of DFIR work, Registry analysis is the collection and interpretation of data and metadata from Registry keys and values.

The collection part is easy...it's the interpretation part of that definition that is extremely important.  In my experience, I see a lot of issues with interpretation of data collected from the Registry.  The two biggest ones are what the timestamps associated with ShimCache entries mean, and what persistence via a particular key path really means.

Many times, you'll see the timestamp embedded in the ShimCache data referred to as either the "execution time" or the "creation/modification" time.  Referring to this timestamp as the "execution time" can be very bad, particularly if you're using it to demonstrate the window of compromise during an incident, or the time between first infection and discovery.  If the file is placed on a system and timestomped prior to being added to the ShimCache, or if the method used to get it on the system preserves the original last modification time, that could significantly skew your understanding of the event.  Analysts need to remember that for systems beyond 32-bit XP, the timestamp in the ShimCache data is the last modification time from the file system metadata; for NTFS, this means the $STANDARD_INFORMATION attribute within the MFT record.
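
As a quick illustration of why that timestamp can't be read as an "execution time", here's a minimal Python sketch (the file name is purely hypothetical) showing how easily the last modification time that the ShimCache would later record can be backdated:

# Minimal sketch (hypothetical file name): on post-XP systems, the ShimCache
# records the file system last-modification time ($STANDARD_INFORMATION),
# which can be trivially backdated before the entry is ever created.
import os
import time
from datetime import datetime, timezone

path = "dropped_malware.exe"          # hypothetical sample name
with open(path, "wb") as f:
    f.write(b"MZ...")                 # placeholder content

# Backdate the last-modification time by roughly two years.
two_years = 2 * 365 * 24 * 60 * 60
stomped = time.time() - two_years
os.utime(path, (stomped, stomped))    # (atime, mtime)

mtime = os.stat(path).st_mtime
print("Last modified:", datetime.fromtimestamp(mtime, tz=timezone.utc))
# If this file is later added to the ShimCache, the entry will carry the
# backdated time above...not the time the file was created, copied in,
# or executed.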

Ryan's slides include some great information about the ShimCache data, as does the original white paper on the subject.

With respect to persistence, I see a lot of write-ups that state that malware creates persistence by creating a value beneath the Run key in the HKCU hive, and then state that this means the malware will be started again the next time the system reboots.  That's not the case at all...if the persistence exists in a user's hive, then following a reboot, the malware won't be reactivated until that user logs in.  I completely understand how this gets misinterpreted, particularly (although not exclusively) by malware analysts...MS says this a lot in their own malware write-ups.  While simple testing will demonstrate otherwise, the vast majority of the time, you'll see malware analysts repeating this statement.

The point is that not all of the persistence locations within the Registry allow applications and programs to start when the system starts.  Some require that a user log in first, and others require some other trigger or mechanism, such as an application being launched.  It's very easy...too easy...to simply state that any Registry value used for persistence allows the application to start on system reboot, because there's very little in the way of accountability.  I've seen instances during incident response where malware was installed only when a particular user logged into the system; if the malware used a Registry value in that user's NTUSER.DAT hive for persistence, the system was rebooted, and the user account was not used to log in, then the malware would not be active.  Making an incorrect statement about the malware could significantly impact the client's decision-making process (regarding AV licenses), or the decisions made by regulatory or compliance bodies (e.g., fines, sanctions, etc.).
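
To make the distinction concrete, here's a minimal sketch using the python-registry library against hives exported from an image (the hive file names are assumptions); the value data tells you *what* gets launched, but the hive it lives in tells you *when*:

# Minimal sketch using the python-registry library, run against hives
# exported from an image; the hive file names below are assumptions.
from Registry import Registry

def list_run_values(hive_file, key_path, note):
    reg = Registry.Registry(hive_file)
    try:
        key = reg.open(key_path)
    except Registry.RegistryKeyNotFoundException:
        return
    print(f"{hive_file}\\{key_path}  (LastWrite: {key.timestamp()})")
    print(f"  -> {note}")
    for value in key.values():
        print(f"  {value.name()} = {value.value()}")

# HKLM Run: entries launch when *any* user logs on...still not at boot.
list_run_values("SOFTWARE",
                "Microsoft\\Windows\\CurrentVersion\\Run",
                "runs at any interactive logon")

# HKCU Run (from the user's NTUSER.DAT): entries launch only when *that*
# user logs on; a reboot alone will not reactivate the malware.
list_run_values("NTUSER.DAT",
                "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                "runs only when this specific user logs on")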

Both of these items, when misinterpreted, can significantly impact the overall analysis of the incident.

Why do we do it?
There is an incredible amount of value in Registry analysis, and even more so when we incorporate it with other types of analysis.  Registry analysis is rarely performed in isolation; rather, most often, it's used to augment other analysis processes, particularly timeline analysis, allowing analysts to develop a clearer, more focused picture of the incident.  Registry analysis can be a significant benefit, particularly when we don't have the instrumentation in place that we would like to have (e.g., process creation monitoring, logging, etc.), but analysts also need to realize that Registry analysis is NOT the be-all and end-all of analysis.
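
As an example of what augmenting timeline analysis can look like, here's a minimal sketch (the hive file name, host name, and user name are all assumptions) that folds Registry key LastWrite times into simple pipe-delimited timeline entries that can be merged with data from other sources:

# Minimal sketch: folding Registry key LastWrite times into a timeline.
# Uses the python-registry library against an exported NTUSER.DAT hive;
# the hive file name, host name, and user name are assumptions, and the
# pipe-delimited line format (time|source|system|user|description) is
# just one convenient way to merge entries with other timeline data.
import calendar
from Registry import Registry

HIVE = "NTUSER.DAT"
SYSTEM = "HOSTNAME"
USER = "jdoe"

reg = Registry.Registry(HIVE)
for key_path in ("Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                 "Software\\Microsoft\\Windows\\CurrentVersion\\RunOnce"):
    try:
        key = reg.open(key_path)
    except Registry.RegistryKeyNotFoundException:
        continue
    # Key LastWrite time as a Unix epoch (python-registry returns UTC)
    epoch = calendar.timegm(key.timestamp().timetuple())
    print(f"{epoch}|REG|{SYSTEM}|{USER}|LastWrite - {key_path}")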

In the presentation, I mention several of the annual security trend reports that we see; for example, from TrustWave, or Mandiant.  My point in bringing these up is that the reports generally include statistics such as dwell time or the median number of days to detection, statistics that are based on some sort of empirical evidence that provides analysts with artifacts/indicators of an adversary's earliest entry into the compromised infrastructure.  If you've ever done this sort of analysis work, you'll know that you may not always be able to determine the initial infection vector (IIV), tracking back to, say, the original phishing email or the original web link/SWC site.  Regardless, this is always based on some sort of hard indicator that an analyst can point to as the earliest artifact, and sometimes, this may be a Registry key or value.

Think about it...for an analyst to determine that the earliest date of compromise was...for example, in the M-Trends 2015 Threat Report, 8 yrs prior to the team being called in...there has to be something on the system, some artifact that acts as a digital muddy boot print on a white carpet.  The fact of the matter is that it's something the analyst can point to and show to another analyst in order to get corroboration.  This isn't something where the analysts sit around rolling D&D dice...they have hard evidence, and that evidence may often be Registry keys or value data.
