Friday, December 28, 2012

Malware Detection

Corey recently posted to his blog regarding his exercise of infecting a system with ZeroAccess.  In his post, Corey provides a great example of a very valuable malware artifact, as well as an investigative process, that can lead to locating malware that may be missed by more conventional means. 

This post isn't meant to take anything away from Corey's exceptional work; he's always done a fantastic job of performing research and presenting his findings.  Rather, my intention is to show another perspective on the data, sort of like "The Secret Policeman's Other Ball", utilizing Corey's work and blog post as a basis, and as a stepping stone.

The ZA sample that Corey looked at was a bit different from what James Wyke of SophosLabs wrote about, but there were enough commonalities that some artifacts could be used to create an IOC or plugin for detecting the presence of this bit of malware, even if AV didn't detect it.  Specifically, the file "services.exe" was infected, an EA attribute was added to the file record in the MFT, and a Registry modification occurred in order to create a persistence mechanism for the malware.  Looking at these commonalities is similar to looking at the commonalities between various versions of the Conficker family, which created a randomly-named service for persistence.

From the Registry hives from Corey's test, I was able to create and test a RegRipper plugin that does a pretty good job of filtering through the Classes/CLSID subkey (from the Software hive) and locating anomalies. In its original form, the MFT parser that I wrote finds the EA attribute, but doesn't specifically flag on it, and it can't extract the shell code and the malware PE file (because the data is non-resident).  However, there were a couple of interesting things I got from parsing the MFT...
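RegRipper plugins are written in Perl; purely as a sketch of the kind of filtering such a plugin might do, here's the logic in Python.  The expected-path list and the sample CLSID entries are my own illustrative assumptions, not the actual plugin code.

```python
# Sketch only: an anomaly filter over Classes/CLSID data. The dict
# stands in for parsed hive data; a real RegRipper plugin would walk
# the hive via Parse::Win32Registry in Perl.
EXPECTED_PREFIXES = (
    r"c:\windows\system32",
    r"c:\windows\syswow64",
    r"c:\program files",
)

def find_anomalies(clsid_servers):
    """Flag CLSIDs whose InprocServer32 path falls outside the usual
    system locations -- a common trait of CLSID-hijack persistence."""
    hits = []
    for clsid, path in clsid_servers.items():
        p = path.lower().strip('"')
        if not p.startswith(EXPECTED_PREFIXES):
            hits.append((clsid, path))
    return hits

# Hypothetical entries: one normal, one pointing into a user profile.
sample = {
    "{00000000-0000-0000-0000-000000000001}": r"C:\Windows\System32\ole32.dll",
    "{00000000-0000-0000-0000-000000000002}": r"C:\Users\user\AppData\Local\n.dll",
}
```

The point isn't the specific prefixes; it's that "normal" is narrow enough here that anomalies stand out even when AV misses the file itself.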

If you refer to Corey's post, take a look at the section regarding the MFT record for the infected services.exe file.  If you look at the time stamps and compare those from the $STANDARD_INFORMATION attribute to those of the $FILE_NAME attribute that Corey posted, you'll see an excellent example of file system tunneling (briefly, when a file is deleted and a file with the same name is created in the same directory within a short window, the "new" file inherits the original file's creation time).  I've talked about this in a number of my presentations, but it's pretty cool to see an actual example of it.  I know that this isn't really "outside the lab", per se, but still, it's pretty cool to see this functionality as a result of a sample of malware, rather than a contrived exercise.  Hopefully, this example will go a long way toward helping analysts understand what they're seeing in the time stamps.

Corey also illustrated an excellent use of timeline analysis to locate other files that were created or modified around the same time that the services.exe file was infected.  What the timeline doesn't show clearly is that the time stamps were extracted from the $FILE_NAME attribute in the MFT...the $STANDARD_INFORMATION attributes for those same files indicate that there was some sort of time stamp manipulation ("timestomping") that occurred, as many of the files have M, A, and B times from 13 and 14 Jul 2009.  However, the date in question that Corey looked at in his blog post was 6 Dec 2012 (the day of the test).  Incorporating Prefetch file metadata and Registry key LastWrite times into a timeline would show a pretty tight "grouping" of these artifacts at or "near" the same time.
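One way to surface that sort of $STANDARD_INFORMATION/$FILE_NAME discrepancy programmatically might look like the sketch below; the attribute values are illustrative, not copied from Corey's data.

```python
from datetime import datetime, timedelta

def timestomp_indicators(si_times, fn_times, tolerance=timedelta(minutes=5)):
    """Compare $STANDARD_INFORMATION and $FILE_NAME time stamps from a
    single MFT record. $SI times falling well before $FN times are a
    classic timestomping indicator, because the user-accessible APIs
    modify $SI, while $FN is maintained by the kernel."""
    flags = []
    for attr in ("M", "A", "C", "B"):
        si, fn = si_times.get(attr), fn_times.get(attr)
        if si and fn and fn - si > tolerance:
            flags.append(attr)
    return flags

# Illustrative values only (backdated to Jul 2009 vs. the 6 Dec 2012
# test date), not the actual stamps from Corey's MFT record.
si = {"M": datetime(2009, 7, 14, 1, 14, 11)}
fn = {"M": datetime(2012, 12, 6, 19, 42, 49)}
```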

Another interesting finding in analyzing the MFT is that the "new" services.exe file was MFT record number 42756 (see Corey's blog entry for the original file's record number).  Looking "near" the MFT record number, there are a number of files and folders that are created (and "timestomped") prior to the new services.exe file record being created.  Searching for some of the filenames and paths (such as C:\Windows\Temp\fwtsqmfile00.sqm), I find references to other variants of ZeroAccess.  But what is very interesting about this is the relatively tight grouping of the file and folder creations, not based on time stamps or time stamp anomalies, but instead based on MFT record numbers.
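That "grouping by record number" observation is easy to automate.  Here's a rough sketch; record number 42756 is from the post, but the other record numbers are hypothetical stand-ins.

```python
def cluster_by_record_number(records, anchor, window=100):
    """records: (record_number, path) pairs from an MFT parse. Return
    entries whose record numbers fall within `window` of the anchor;
    files allocated together tend to receive nearby MFT records, even
    when their time stamps have been stomped."""
    return sorted(
        (num, path) for num, path in records if abs(num - anchor) <= window
    )

sample = [
    (42700, r"C:\Windows\Temp\fwtsqmfile00.sqm"),
    (42756, r"C:\Windows\System32\services.exe"),
    (12001, r"C:\Users\user\Documents\notes.txt"),
]
near = cluster_by_record_number(sample, anchor=42756)
```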

Some take-aways from this...or at least what I took away...are:

1. Timeline analysis is an extremely powerful analysis technique because it provides us with context, as well as an increased relative level of confidence in the data we're analyzing.

2. Timeline analysis can be even more powerful when it is not the sole analysis technique, but incorporated into an overall analysis plan.  What about that Prefetch file for services.exe?  A little bit of Prefetch file analysis would have produced some very interesting results, and using what was found through this analysis technique would have led to other artifacts that should be examined in the timeline.  Artifacts found outside of timeline analysis could be used as search terms or pivot points in a timeline, which would then provide context to those artifacts, which could then be incorporated back into other analyses.

3. Some folks have told me that having multiple tools for creating timelines makes creating timelines too complex a task; however, the tools I tend to create and use are multi-purpose.  For example, I use a single script (I also have a 'compiled' EXE) both for Prefetch file analysis and for parsing Prefetch file metadata into a timeline.  I use RegRipper for parsing (and some modicum of analysis) of Registry hives, as well as to generate timeline data from a number of keys and value data.  I find this to be extremely valuable...I can run a tool, find something interesting in a data set as a result of the analysis, and then run the tool again, against the same data set, but with a different set of switches, and populate my timeline.  I don't need to switch GUIs and swap out dongles.  Also, it's easy to remember the various tools and switches because (a) each tool is capable of displaying its syntax via '-h', and (b) I created a cheat sheet for the tool usage.

4.  Far too often, a root cause analysis, or RCA, is not performed, for whatever reason.  We're losing access to a great deal of data, and as a result, we're missing out on a great deal of intel.  Intel such as, "hey, what this AV vendor wrote is good, but I tested a different sample and found this...".  Perhaps the reason for not performing the RCA is that "it's too difficult", "it takes too long", or "it's not worth the effort".  Well, consider my previous post, Mr. CEO...without an RCA, are you being served?  What are you reporting to the board or to the SEC, and is it correct?  Are you going with, "it's correct to the best of my knowledge", after you went to "Joe's Computer Forensics and Crabshack" to get the work done?
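Regarding the Prefetch file analysis mentioned in point 2 above, the metadata involved (last run time, run count) lives at version-specific offsets in the file.  This sketch uses the commonly documented offsets for the XP and Vista/Win7 format versions; treat them as assumptions to verify against known-good samples.

```python
import struct
from datetime import datetime, timedelta, timezone

# Offsets to (last run FILETIME, run count) per format version; these
# are the commonly documented values for XP (17) and Vista/Win7 (23).
OFFSETS = {17: (0x78, 0x90), 23: (0x80, 0x98)}

def pf_metadata(buf):
    version = struct.unpack_from("<I", buf, 0)[0]
    if buf[4:8] != b"SCCA" or version not in OFFSETS:
        raise ValueError("not a recognized Prefetch file")
    t_off, c_off = OFFSETS[version]
    filetime = struct.unpack_from("<Q", buf, t_off)[0]
    run_count = struct.unpack_from("<I", buf, c_off)[0]
    # FILETIME: 100ns intervals since 1601-01-01 UTC
    last_run = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(
        microseconds=filetime // 10)
    return last_run, run_count

# Synthetic Win7-style buffer, for illustration only.
buf = bytearray(0xA0)
struct.pack_into("<I", buf, 0, 23)                     # format version 23
buf[4:8] = b"SCCA"                                     # signature
struct.pack_into("<Q", buf, 0x80, 129994558690000000)  # made-up FILETIME
struct.pack_into("<I", buf, 0x98, 7)                   # run count
last_run, count = pf_metadata(bytes(buf))
```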

Now, to add to all of the above, take a look at this post from the Sploited blog, entitled Timeline Pivot Points with the Malware Domain List.  This post provides an EXCELLENT example of how timeline analysis can be used to augment other forms of analysis, or vice versa.  The post also illustrates how this sort of analysis can easily be automated.  In fact, this can be part of the timeline creation mechanism...when any data source is parsed (e.g., browser history list, TypedUrls Registry key, shellbags, etc.), have any extracted URLs compared against the MDL, and then generate a flag of some kind within the timeline events file, so that the flag "lives" with the event.  That way, you can search for those events (based on the flag) after the timeline is created, or, as part of your analysis, create a timeline of only those events.  This would be similar to scanning all files in the Temp and system32 folders, looking for PE files with odd headers or mismatched extensions, and then flagging them in the timeline, as well.
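The flagging mechanism described above might be sketched like this; the MDL entries and timeline events are made up for illustration, and in practice the list would be parsed from a downloaded copy.

```python
from urllib.parse import urlparse

# Hypothetical MDL snapshot, stood up as an in-memory set.
mdl = {"bad.example.com", "evil.example.net"}

def flag_events(events):
    """events: (timestamp, source, description) tuples. Append an
    'MDL' flag when the description contains a listed domain, so the
    flag 'lives' with the event in the events file."""
    flagged = []
    for ts, source, desc in events:
        host = urlparse(desc).hostname if "://" in desc else None
        tag = "MDL" if host in mdl else ""
        flagged.append((ts, source, desc, tag))
    return flagged

timeline = [
    ("2012-12-06 19:42:49", "TypedUrls", "http://bad.example.com/gate.php"),
    ("2012-12-06 19:40:10", "Prefetch", "SERVICES.EXE was run"),
]
```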

Great work to both Corey and Sploited for their posts!

Friday, December 21, 2012

Are You Being Served?

Larry Daniel recently posted to his Ex Forensis blog regarding a very interesting topic, "The Perils of Using the Local Computer Shop for Computer Forensics".  I've thought about this before...when I was on the ISS ERS (and later the IBM ISS ERS) team, on more than one occasion we'd arrive on-site to work with another team, or to take over after someone else had already done some work.  In a couple of instances, I worked with other teams that, while technically skilled, were not full-time DFIR folks.  Larry's post got me to thinking about who is being asked to perform DFIR work, and the overall effect that it has on the industry.

There's a question that I ask myself sometimes, particularly when working on an examination...am I doing all I can to provide the best possible product to my customers?  As best I can, I work closely with the customer to establish the goals of the exam, and to determine the parameters of what they are most interested in.  I do this because, like most analysts, I can spend weeks finding all manner of "interesting" stuff, but my primary interest lies in locating artifacts that pertain to what the customer's interested in, so that I can provide them with what they need in order to make the decisions that they need to make.  I try to find multiple artifacts to clearly support my findings, and I avoid blanket statements and speculation as much as I can.

Also, something that I do after every exam is take a look at what I did and what I needed to do, and ask myself if there's a way I could do it better (faster, more comprehensive and complete, etc.) the next time.

Let's take a step away from DFIR work for a moment.  Like many, I make use of other's services.  I own a vehicle, which requires regular upkeep and preventative maintenance.  Sometimes, if all I need is an oil change, I'll go to one of the commercial in-and-out places, because I've looked into the service that they provide, what it entails, and that's all I need at the moment.  However, when it comes to other, perhaps more specialized maintenance...brake work, inspections recommended by the manufacturer, as well as inspections of a trailer I own...I'm going to go with someone I know and trust to do the work correctly.  Another thing I like about working with folks like this is that we tend to develop a relationship where, if during the course of their work, they find something else that requires my attention, they'll let me know, inform me about the issue, and let me make the decision.  After all, they're the experts.

Years ago...1992, in fact...I owned an Isuzu Rodeo.  I'd take it to one of the drive-in places to get the oil changed on a Saturday morning.  The first time I took it to one place, I got an extra charge on my bill for a 4-wheel drive vehicle.  Hold on, I said!  Why are you adding a charge for a 4-wheel drive vehicle, when the vehicle is clearly 2-wheel drive?  The manager apologized, and gave me a discount on my next oil change.  However, a couple of months later, I came back to the same shop with the same vehicle and went through the same thing all over again.  Needless to say, had I relied on the "expertise" of the mechanics, I'd have paid more than I needed to, several times over.  I never went back to that shop again, and from that point on, I made sure to check everything on the list of services performed before paying the bill.

Like many, I own a home, and there are a number of reasons for me to seek services...HVAC, as well as other specialists (particularly as a result of Super Storm Sandy).  I tend to follow the same sort of path with my home that I do with my vehicles...small stuff that I can do myself, I do.  Larger stuff that requires more specialized work, I want to bring in someone I know and trust.  I'm a computer nerd...I'm not an expert in automobile design, nor am I an expert in home design and maintenance.  I can write code to parse Registry data and shell items, but I am not an expert in building codes.

So, the question I have for you, reader, is do you know that you're getting quality work?  To Larry's point, who are you hiring to perform the work? 

At the first SANS Forensic Summit, I was on a panel with a number of the big names in DFIR, several of whom are SANS instructors.  One of the questions that was asked was, "what qualities do you look for in someone you're looking to hire to do DFIR work?"  At the time, my response was simply, "what did they do last week?"  My point was, are you going to hire someone to do DFIR work, if last week they'd done a PCI assessment and the week prior to that, they'd performed a pen test?  Or would you be more likely to hire someone who does DFIR work all the time? I stand by that response, but would add other qualifications to it.  For example, how "tied in" are the examiners?  Do they simply rely on the training they received at the beginning of their careers, or do they continually progress in their knowledge and education?  Do they seek professional improvement and continuing education?  More importantly, do they use it?  Maybe the big question is not so much that the examiners do these things, but do their managers require that the examiners do these things, and make them part of performance evaluations?

Are you being served?

Addendum:  Why does any of this matter?  So what?  Well, something to consider is, what will a CEO be reporting to the board, as well as to the SEC?  Will the report state, "nothing found", or worse, will the report be speculation of a "browser drive-by"?  In my experience, most regulatory organizations want to know the root cause of an issue (such as a compromise or data leakage)...they don't want a laundry list of what the issue could have been.

In addition, consider the costs associated with PCI (or any other sensitive information) data theft; if an organization is compromised, and they hire the local computer repair shop to perform the "investigation", what happens when PCI data is discovered to be involved, or potentially involved?  Well, you have to go pay for the investigation all over again, only this time it's after someone else has come in and "investigated", and this is going to have a potentially negative effect on the final report.  I think plumbers have a special fee for helping folks who have already tried to "fix" something themselves.  ;-)

Look at the services that you currently have in your business.  Benefits management.  Management of a retirement plan.  Payroll.  Do you go out every month and select the lowest bidder to provide these services?  Why treat the information security posture of your organization this way?

Saturday, December 15, 2012

There are FOUR lights!

Okay, you're probably wondering what Picard and one particular episode of Star Trek TNG have to do with forensicating.  Well, to put it quite simply...everything!

I recently posted the question in a forum regarding Shellbag analysis, and asked who was actually performing it as part of their exams.  One answer I got was, "...I need to start."  When I asked this same question at the PFIC 2012 conference of a roomful of forensicators, two raised their hands...and one admitted that they hadn't done so since SANS training.

During exams, I've seen cases where the shellbags contain artifacts of user activity that are not found anywhere else on the system.  For example, I've seen the use of Windows Explorer to perform FTP transfers (my publisher used to have me do this to transfer files), where artifacts of that activity were not found anywhere else on the system.  When this information was added to a timeline, a significant portion of the exam sort of snapped into place, and became crystal clear.

Something I've seen with respect to USB devices that were connected to Windows systems is that our traditional methodologies for parsing this information out of a system are perhaps...incomplete.  I have seen systems where some devices are not so much identified as USB storage devices by Windows systems (rather, they're identified as portable devices...iPods, digital cameras, etc.), and as such, starting by examining the USBStor subkeys means that we may miss some of these devices that could be used in intellectual property theft, as well as the creation and trafficking of illicit images.  Yet, I have seen clear indications of a user's access to these devices within the shellbags artifacts, in part because of my familiarity with the actual data structures themselves.

The creation and use of these artifacts by the Windows operating system goes well beyond just the shellbags, as these artifacts are comprised of data structures known as "shell items", which can themselves be chained together into "shell item ID lists".  Rather than providing a path that consists of several ASCII strings that identify resources such as files and directories, a shell item ID list builds a path to a resource using these data structures, which some in the community have worked very hard to decipher.  What this work has demonstrated is that there is a great deal more information available than most analysts are aware.
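As a rough illustration of these structures, a shell item ID list is a run of size-prefixed items, and the time stamps embedded in file entry shell items are 16-bit DOSDate values.  This is a minimal sketch; real shell items carry many more fields than are shown here.

```python
import struct
from datetime import datetime

def walk_idlist(buf):
    """Yield the raw bytes of each shell item in an item ID list: each
    item carries a 16-bit size prefix, and a zero size terminates."""
    off = 0
    while True:
        (size,) = struct.unpack_from("<H", buf, off)
        if size == 0:
            return
        yield buf[off:off + size]
        off += size

def decode_dosdate(dosdate, dostime):
    """Decode the 16-bit DOSDate date/time stamps embedded in file
    entry shell items: 2-second resolution, recorded in local time."""
    day = dosdate & 0x1F
    month = (dosdate >> 5) & 0x0F
    year = ((dosdate >> 9) & 0x7F) + 1980
    sec = (dostime & 0x1F) * 2
    minute = (dostime >> 5) & 0x3F
    hour = (dostime >> 11) & 0x1F
    return datetime(year, month, day, hour, minute, sec)
```

The 2-second resolution and local-time recording are exactly the sort of detail an analyst misses when a tool only presents the ASCII path.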

So why is understanding shell items and shell item ID lists important? Most of the available tools for parsing shellbags, for example, simply show the analyst the path to the resource, but never identify the data structure in question...they simply provide the ASCII representation to the analyst.  These structures are used in the ComDlg32 subkey values in the NTUSER.DAT hive on Windows Vista and above systems, as well as in the IE Start Menu and Favorites artifacts within the Registry.  An interesting quote from the post:

Of importance to the forensic investigator is the fact that, in many cases, these subkeys and their respective Order values retain references to Start Menu and Favorites items after the related applications or favorites have been uninstalled or deleted.

I added the emphasis to the second half of the quote, because it's important.  Much like other artifacts that are available, references to files, folders, network resources and even applications are retained long after they've been uninstalled or deleted.  So understanding shell items is foundational to understanding larger artifacts.

But it doesn't stop with the shellbags...shell item ID lists are part of Windows shortcut (LNK) files, which means that they're also part of the Jump Lists found on Windows 7 and 8.

Okay, but so what, right?  Well, the SpiderLabs folks posted a very interesting use of LNK files to gather credentials during a pen test; have any forensic analysts out there seen the use of this technique before?  Perhaps more importantly, have you looked for it?  Would you know how to look for this during an exam?

Here's a really good post that goes into some detail regarding how LNK files can be manipulated with malicious intent, demonstrating how important it is to parse the shell item ID lists.

So, the point of the graphic, as well as of the post overall, is this...if you're NOT parsing shellbags as part of your exam, and if you're NOT parsing through shortcut files as part of your root cause analysis (RCA), then you're only seeing three lights.

There are, in fact, four lights.

DOSDate Time Stamps in Shell Items
ShellBag Analysis, Revisited...Some Testing

Thursday, November 29, 2012

Forensic Scanner has moved

In order to be in line with other projects available through my employer, the Forensic Scanner has moved from Google Code to GitHub. When you get to the page, simply click the "Zip" button and the project will download as a Zip archive.

There has been no change to the Scanner itself.

Also, note that the license has changed to the Perl Artistic License.

Monday, November 26, 2012

The Next Big Thing

First off, this is not an end-of-year summary of 2012, nor where I'm going to lay out my predictions for 2013...because that's not really my thing.  What I'm more interested in addressing is, what is "The Next Big Thing" in DFIR?  Rather than making a prediction, I'm going to suggest where, IMHO, we should be going within our community/industry.

There is, of course, the CDFS, which provides leadership and advocacy for the DFIR profession.  If you want to be involved in a guiding force behind the direction of our profession, and driving The Next Big Thing, consider becoming involved through this group.

So what should be The Next Big Thing in DFIR?  In the time I've been in and around this profession, one thing I have seen is that there is still a great deal of effort directed toward providing a layer of abstraction to analysts in order to represent the data.  Commercial tools provide frameworks for looking at the available (acquired) data, as do collections of free tools.  Some tools or frameworks provide different capabilities, such as allowing the analyst to easily conduct keyword searches, or providing default viewers or parsers for some file types.  However, what most tools do not provide is an easy means for analysts to describe the valuable artifacts that they've found, nor an easy means to communicate intelligence gathered through examination and research to other analysts.

Some of what I see happening includes analysts going to training and/or a conference, hearing "experts" (don't get me wrong, many speakers are, in fact, experts in their field...) speak, and then returning to their desks with...what?  Not long ago, I was giving a presentation and the subject of analysis of shellbag artifacts came up.  I asked how many of the analysts in the room did shellbag analysis and two raised their hands.  One of them stated that they had analyzed shellbag artifacts when they attended a SANS training course, but they hadn't done so since.  I then asked how many folks in the room conducted analysis where what the user did on the system was of primary interest in most of their exams, and almost everyone in the room raised their hands.  The only way I can explain the disparity between the two responses is that the tools used by most analysts provide a layer of abstraction to the data (acquired images) that they're viewing, and leave the identification of valuable (or even critical) artifacts and the overall analysis process up to the analyst.  A number of training courses provide information regarding analysis processes, but once analysts return from these courses, I'm not sure that there's a great deal of stimulus for them to incorporate what they just learned into what they do.  As such, I tend to believe that there's a great deal of extremely valuable intelligence either missed or lost within our community.

I'm beginning to believe more and more that tools that simply provide a layer of abstraction to the data viewed by analysts are becoming a thing of the past.  Or, maybe it's more accurate to say that they should become a thing of the past.  The analysis process needs to be facilitated more, and the sharing of information and intelligence between both the tools used, as well as the analysts using them, needs to become more part of our daily workflow.

Part of this belief may be because many of the tools available don't necessarily provide an easy means for analysts to share that process and intelligence.  What do I mean by that?  Take a look at some of the tools used by analysts today, and consider why those tools are used.  Now, think to yourself...how easy is it for one analyst using that tool to share any intelligence that they've found with another (any other) analyst?  Let's say that one analyst finds something of value during an exam, and it would behoove the entire team to have access to that artifact or intelligence.  Using the tool or framework available, how does the analyst then share the analysis or investigative process used, as well as the artifact found or intelligence gleaned?  Does the framework being used provide a suitable means for doing so?

Analysts aren't sharing intelligence for two reasons...they don't know how to describe it, and even if they do, there's no easy means for doing so within the framework that they're using.  They can't easily share information and intelligence between the tools they're using, nor with other analysts, even those using the same tools.

For a great example of what I'm referring to, take a look at Volatility.  This started out as a project that was delivering something not available via any other means, and the folks that make up the team continue to do just that.  The framework provides much more than just a layer of abstraction that allows analysts to dig into a memory dump or hibernation file...the team also provides plugins that serve to illustrate not just what's possible to retrieve from a memory dump, but also what they've found valuable, and how others can find these artifacts via a repeatable process.  Another excellent resource is MHL et al's book, The Malware Analyst's Cookbook, which provides a great deal of process information via the format, as well as intel via the various 'recipes'.

I kind of look at it this way...when I was in high school, we read Chaucer's Canterbury Tales, and each year the books were passed down from the previous year.  If you were lucky, you'd get a copy with some of the humorous or ribald sections highlighted...but what wasn't passed down was the understanding of what was leading us to read these passages in the first place.  Sure, there's a lot of neat and interesting stuff that analysts see on a regular basis, but what we aren't good at is sharing the really valuable stuff and the intel with other analysts.  If that's something that would be of value...one analyst being aware of what another analyst found...then as consumers we need to engage tool and process developers directly and consistently, let them know what our needs are, and start intelligently using those processes and tools that meet our needs.

Wednesday, November 21, 2012


Timeline Analysis
I recently taught another iteration of our Timeline Analysis course, and as is very often the case, I learned some things as a result.

First, the idea (in my case, thanks goes to Corey Harrell and Brett Shavers) of adding categories to timelines in order to increase the value of the timeline, as well as to bring a new level of efficiency to the analysis, is a very good one.  I'll discuss categories a bit more later in this post.

Second (and thanks goes out to Cory Altheide for this one), I'm reminded that timeline analysis provides the examiner with context to the events being observed, as well as a relative confidence in the data.  We get context because we see more than just a file being modified...we see other events around that event that provide indications as to what led to the file being modified.  Also, we know that some data is easily mutable, so seeing other events that are perhaps less mutable occurring "near" the event in question gives us confidence that the data we're looking at is, in fact, accurate.

Another thing to consider is that timelines help us reduce complexity in our analysis.  If we understand the nature of the artifacts and events that we observe in a timeline, and understand what creates or modifies those artifacts, we begin to see what is important in the timeline itself.  There is no magic formula for creating timelines...we may have too little data in a timeline (i.e., just a file being modified) or we may have too much data.  Knowing what various artifacts mean or indicate allows us to separate the wheat from the chaff, or separate what is important from the background noise on systems.

Adding category information to timelines can do a great deal to make analysis ssssooooo much easier!  For example, when adding Prefetch file metadata to a timeline, identifying the time stamps as being related to "Program Execution" can do a great deal to make analysis easier, particularly when it's included along with other data that is in the same category.  Also, as of Vista (and particularly so with Windows 7 and 2008 R2), there has been an increase in the number of event logs, and many of the event IDs that we're familiar with from Windows XP have changed.  As such, being able to identify the category of an event source/ID pair, via a short descriptor, makes analysis quicker and easier.
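The source/ID-to-category idea can be sketched as a simple lookup table; the entries below are illustrative examples, not a complete mapping.

```python
# Map (event source, event ID) pairs to short category descriptors.
# Entries are examples for Vista+ event logs, not an exhaustive list.
CATEGORY_MAP = {
    ("Security", 4624): "Login",
    ("Security", 4634): "Logoff",
    ("Service Control Manager", 7045): "Service Installed",
    ("Microsoft-Windows-TerminalServices-LocalSessionManager", 21): "Login",
}

def categorize(source, event_id):
    """Return a category descriptor, or '' when the pair is unmapped."""
    return CATEGORY_MAP.get((source, event_id), "")
```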

One thing that is very evident to me is that many artifacts will have a primary, as well as a secondary (or even tertiary) category.  For example, let's take a look at shortcut/LNK files.  Shortcuts found in a user's Recents folder are created via a specific activity performed by the user...most often, by the user double-clicking a file via the shell.  As such, the primary category that a shortcut file will belong to is something akin to "File Access", as the user actually accessed the file.  While it may be difficult to keep the context of how the artifact is created/modified in your mind while scrolling through thousands of lines of data, it is oh so much easier to simply provide the category right there along with the data.

Now, take a look at what happens when a user double-clicks a file...that file is opened in a particular application, correct?  As such, a secondary category for shortcut files (found in the user's Recents folder) might be "Program Execution".  Now, the issue with this is that we would need to do some file association analysis to determine which application was used to open the file...we can't always assume that files ending in the ".txt" extension are going to be opened via Notepad. File association analysis is pretty easy to do, so it's well worth doing.
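A minimal sketch of that file association lookup (extension, to ProgID, to open command, as laid out under the Classes key of the Software hive); the hive data is mocked as a dict, and the names are illustrative.

```python
# Mocked hive data: an extension subkey maps to a ProgID, and the
# ProgID's shell\open\command value holds the command line.
classes = {
    ".txt": "txtfile",
    r"txtfile\shell\open\command": r"%SystemRoot%\system32\NOTEPAD.EXE %1",
}

def open_command(extension):
    """Resolve extension -> ProgID -> the command line used to open it."""
    progid = classes.get(extension)
    if not progid:
        return None
    return classes.get(progid + r"\shell\open\command")
```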

Not all artifacts are created alike, even if they have the same file extension...that is to say, some artifacts may have to have categories based on their context or location.  Consider shortcut files on the user's desktop...many times, these are either specifically created by the user, or are placed there as the result of the user installing an application.  For those desktop shortcuts that point to applications, they do not so much refer to "File Access", as they do to "Application Installation", or something similar.  After all, when applications are installed and create a shortcut on the desktop, that shortcut very often contains the command line "app.exe %1", and doesn't point to a .docx or .txt file that the user accessed or opened. 

Adding categories to your timeline can bring a great deal of power to your fingertips, in addition to reducing the complexity and difficulty of finding the needle(s) in the hay stack...or stack of needles, as the case may be.  However, this addition to timeline analysis is even more powerful when it's done with some thought and consideration given to the actual artifacts themselves.  Our example of LNK files clearly shows that we cannot simply group all LNK files in one category.  The power and flexibility to include categories for artifacts based on any number of conditions is provided in the Forensic Scanner.

Sorry that I didn't come up with a witty title for this section of the post, but I wanted to include something here.  I caught up on SketchyMoose's blog recently and found this post that included a mention of RegRipper.

In the post, SM mentions a plugin named ''.  This is an interesting plugin that I created as a result of something Don Weber found during an exam when we were on the ...that the bad guy was hiding PE files (or portions thereof) in Registry values!  That was pretty cool, so I wrote a plugin.  See how that works?  Don found it, shared the information, and then a plugin was created that could be run during other exams.

SM correctly states that the plugin is looking for "MZ" in the binary data, and says that it's looking for it at the beginning of the value.  I know it says that in the comments at the top of the plugin file, but if you look at the code itself, you'll see that it runs a grep(), looking for 'MZ' anywhere in the data.  As you can see from the blog post, the plugin not only lists the path to the value, but also the length of the binary data being examined...it's not likely that you're going to find executable code in 32 bytes of data, so it's a good visual check for deciding which values you want to zero in on.
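The plugin itself is Perl; a Python sketch of the same check would look like this (the value paths and data are hypothetical).

```python
def find_mz(values):
    """values: dict of value path -> binary data. Report every value
    containing 'MZ' anywhere in the data (not just at offset 0), plus
    the data length as a quick visual check -- a PE file isn't going
    to fit in a handful of bytes."""
    hits = []
    for path, data in values.items():
        off = data.find(b"MZ")
        if off != -1:
            hits.append((path, off, len(data)))
    return hits

# Hypothetical value paths and data, for illustration.
sample = {
    r"HKLM\Software\Example\blob": b"\x00" * 16 + b"MZ\x90\x00" + b"\x00" * 400,
    r"HKLM\Software\Example\small": b"\x01\x02MZ",  # 4 bytes: too small for a PE
}
```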

SM goes on to point out the results of the plugin...which are very interesting.  Notice that in the output of that plugin, there's a little note that indicates what 'normal' should look like...this is a question I get a lot when I give presentations on Registry or Timeline Analysis...what is 'normal', or what about what I'm looking at should jump out at me as 'suspicious'?  With this plugin, I've provided a little note that tells the analyst, hey, anything other than just "userinit.exe" is gonna be suspicious!
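A minimal version of that "anything beyond userinit.exe is suspicious" check might look like the following sketch (the input strings are hypothetical; the real value lives under the Winlogon key):

```python
# Flag anything in the Winlogon Userinit value beyond userinit.exe itself.
def check_userinit(value):
    entries = [e.strip() for e in value.split(",") if e.strip()]
    extra = [e for e in entries if not e.lower().endswith("userinit.exe")]
    return extra  # an empty list means the value looks normal

print(check_userinit(r"C:\Windows\system32\userinit.exe,"))          # []
print(check_userinit(r"C:\Windows\system32\userinit.exe,evil.exe"))  # ['evil.exe']
```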

USB Stuff
SM also references a Hak5 episode by Chris Gerling, Jr., that discusses mapping USB storage devices found on Windows systems.  I thought I'd reference that here, in order to say, "...there are more things in heaven and earth than are dreamt of in your philosophy, Horatio!"  Okay, so what does quoting the Bard have to do with anything?  In her dissertation, entitled Pitfalls of Interpreting Forensic Artifacts in the Registry, Jacky Fox follows a similar process for identifying USB storage devices connected to a Windows system.  However, the currently accepted process for doing this USB device identification has some...shortcomings...that I'll be addressing.  Strictly speaking, the process works, and works very well.  In fact, if you follow all the steps, you'll even be able to identify indications of USB thumb drives that the user may have tried to obfuscate or delete.  However, this process does not identify all of the devices that are presented to the user as storage.

Please don't misunderstand me here...I'm not saying that either Chris or Jacky are wrong in the process that they use to identify USB storage devices.  Again, they both refer to using regularly accepted examination processes.  Chris refers to Windows Forensic Analysis 2/e, and Jacky has a lot of glowing and positive things to say about RegRipper in her dissertation (yes, I did read it...the whole thing...because that's how I roll!), and some of those resources are based on information that Rob Lee has developed and shared through SANS.  However, as time and research have progressed, new artifacts have been identified and need to be incorporated into our analysis processes.

I ran across this listing for Win32/Phorpiex on the MS MMPC blog, and it included something pretty interesting.  This malware includes a propagation mechanism when using removable storage devices.

While this propagation mechanism seems pretty interesting, it's not nearly as interesting as it could be, because (as pointed out in the write up) when the user clicks on the shortcut for what they think is a folder, they don't actually see the folder opening.  As such, someone might look for an update to this propagation mechanism in the near future, if one isn't already in the wild.

What's interesting to me is that there's no effort taken to look at the binary contents of the shortcut/LNK files to determine if there's anything odd or misleading about them.  For example, most of the currently used tools only parse the LinkInfo block of the LNK file...not all tools parse the shell item ID list that comes before the LinkInfo block.  MS has done a great job of documenting the binary specification for LNK files, but commercial tools haven't caught up.

In order to see where/how this is an issue, take a look at this CyanLab blog post.

Malware Infection Vectors
This blog post recently turned up on MMPC...I think that it's great because it illustrates how systems can be infected via drive-bys that exploit Java vulnerabilities.  However, I also think that blog posts like this aren't finishing the race, as it were...they start, get most of the way down the track, and then stop before they show what this exploit looks like on a system.  Getting and sharing this information would serve two purposes...collect intelligence that they (MS) and others could use, and help get everyone else closer to conducting root cause analyses after an incident.  I think that the primary reason that RCAs aren't being conducted is that most folks think that it takes too long or is too difficult.  I'll admit...the further away from the actual incident that you detect a compromised or infected system, the harder it can be to determine the root cause or infection vector.  However, understanding the root cause of an incident, and incorporating it back into your security processes, can go a long way toward helping you allocate resources toward protecting your assets, systems, and infrastructure.

If you want to see what this stuff might look like on a system, check out Corey's jIIr blog posts that are labeled "exploits".  Corey does a great job of exploiting systems and illustrating what that looks like on a system.

Wednesday, November 07, 2012

PFIC2012 slides

Several folks at PFIC 2012 asked that I make my slides from the Windows 7 Forensic Analysis and Timeline Analysis presentations here they are.

I'll have to admit, I've become somewhat hesitant to post slides, not because I don't want to share the info, but because posting the slides from my presentations doesn't share the info...most of the information that is shared during a presentation isn't covered in the slides.

Wednesday, October 31, 2012

Shellbag Analysis, Revisited...Some Testing

I blogged previously on the topic of Shellbag Analysis, but I've found that in presenting on the topic and talking to others, there may be some misunderstanding of how these Registry artifacts may be helpful to an analyst.  With Jamie's recent post on the Shellbags plugin for Volatility, I thought it would be a good idea to revisit this information, as sometimes repeated exposure is the best way to start developing an understanding of something.  In addition, I wanted to do some testing in order to determine the nature of some of the metadata associated with shellbags.

In her post, Jamie states that the term "Shellbags" is commonly used within the community to indicate artifacts of user window preferences specific to Windows Explorer.  MS KB 813711 indicates that the artifacts are created when a user repositions or resizes an Explorer window.

ShellItem Metadata
As Jamie illustrates in her blog post, many of the structures that make up the shell items (within the shellbags) contain embedded time stamps, in DOSDate format.  However, there's still some question as to what those values mean (even though the available documentation refers to them as MAC times for the resource in question) and how an analyst may make use of them during an examination.

Having some time available recently due to inclement weather, I thought I would conduct a couple of very simple tests in order to begin to address these questions.

Testing Methodology
On a Windows 7 system, I performed a number of consecutive, atomic actions and recorded the system time (visible via the system clock) for when each action was performed.  The following table lists the actions I took, and the time (in local time format) at which each action occurred.

Action Time
Create a dir: mkdir d:\shellbag 12:54pm
Create a file in the dir: echo "..."  > d:\shellbag\test.txt 1:03pm
Create another dir: mkdir d:\shellbag\test 1:08pm
Create a file in the new dir: echo "..." > d:\shellbag\test\test.txt 1:16pm
Delete a file: del d:\shellbag\test.txt 1:24pm
Open D:\shellbag\test via Explorer, reposition/resize the window 1:29pm
Close the Explorer window opened in the previous step 1:38pm

The purpose of having some time pass between actions is so that they can be clearly differentiated in a timeline.

Once these steps were completed, I restarted the system, and once it came back up, I extracted the USRCLASS.DAT hive from the relevant user account into the D:\shellbag directory for analysis (at 1:42pm).  I purposely chose this directory in order to determine how actions external to the shellbags artifacts affect the overall data seen.

The following table lists the output from the RegRipper plugin for the directories in question (all times are in UTC format):

Directory MRU Time Modified Accessed Created
Desktop\My Computer\D:\shellbag 2012-10-29 17:29:25 2012-10-29 17:24:26 2012-10-29 17:24:26 2012-10-29 16:55:00
Desktop\My Computer\D:\shellbag\test 2012-10-29 17:29:29 2012-10-29 17:16:20 2012-10-29 17:16:20 2012-10-29 17:08:18

Let's walk through these results.  First, I should remind you that the MRU Time is populated from Registry key LastWrite times (FILETIME format, granularity of 100 ns), while the MAC times are embedded within the various shell items (used to reconstruct the paths) in DOSDate time format (granularity of 2 seconds).
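To make that granularity difference concrete, here's an assumed conversion (a sketch, not tool code) of a FILETIME, which counts 100 ns ticks since 1601-01-01 UTC:

```python
from datetime import datetime, timedelta

# A Registry key LastWrite time is a FILETIME: 100 ns intervals since
# 1601-01-01 UTC.  Contrast that with the 2-second DOSDate resolution
# of the embedded shell item time stamps.
def filetime_to_dt(ft):
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

print(filetime_to_dt(0))           # 1601-01-01 00:00:00
print(filetime_to_dt(10_000_000))  # one second later
```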

First, we can see that the Created dates for both folders correspond approximately to when the folders were actually created.  We can also see that the same thing is true for the Modified dates.  Going back to the live system and typing "dir /tw d:\shell*" shows me that the last modification time for the directory is 1:42pm (local time), which corresponds to changes made to that directory after the USRCLASS.DAT hive file was extracted.

Next, we see that MRU Time values correspond approximately to when the D:\shellbag\test folder was opened and then resized/repositioned via the Explorer shell, and not to when the Explorer window was actually closed.

Based on this limited test, it would appear that the DOSDate time stamps embedded in the shell items for the folders correspond to the MAC times of that folder, within the file system, at the time that the shell items were created.  In order to test this, I deleted the d:\shellbag\test\test.txt file at 2:14pm, local time, and then extracted a copy of the USRCLASS.DAT and parsed it the same way I had before...and saw no changes in the Modified times listed in the previous table.

In order to test this just a bit further, I opened Windows Explorer, navigated to the D:\shellbag folder, and repositioned/resized the window at 2:21pm (local time), waited 2 minutes, and closed the window.  I extracted and parsed the USRCLASS.DAT hive again, and this time, the MRU Time for the D:\shellbag folder had changed to 18:21:48 (UTC format).  Interestingly, that was the only time that had changed...the Modified time for the D:\shellbag\test folder remained the same, even though I had deleted the test.txt file from that directory at 2:14pm local time ("dir /tw d:\shellbag\te*" shows me that the last written time for that folder is, indeed, 2:14pm).

Further testing is clearly required; however, it would appear that based on this initial test, we can draw the following conclusions with respect to the shellbag artifacts on Windows 7:

1.  The embedded DOSDate time stamps appear to correspond to the MAC times of the resource/folder at the time that the shell item was created.  If the particular resource/folder was no longer present within the active file system, an analyst could use the Created date for that resource in a timeline.

2.  Further testing needs to be performed in order to determine the relative value of the Modified date, particularly given that events external to the Windows Explorer shell (i.e., creating/deleting files and subfolders after the shell items have been created) may have limited effect on the embedded dates.

3.  The MRU Time appears to correspond to when the folder was resized or repositioned.  Analysts should keep in mind that (a) there are a number of ways to access a folder that do not require the user to reposition or resize the window, and (b) the MRU Time is a Registry key LastWrite time that only applies to one folder within the key...the Most Recently Used folder, or the one listed first in the MRUListEx value.

I hope that folks find this information useful.  I also hope that others out there will look at this information, validate it through their own testing, and even use it as a starting point for their own research.

Monday, October 29, 2012


Being socked in by the weather, I thought it would be a good time to throw a couple of things out there...

Mounting an Image
In order to test or make use of the Forensic Scanner, you first need to have an image.  If you don't have an image available, you can download sample images from a number of locations online.  Or you can image your own system, or you can use virtual machine files (FTK Imager will mount a .vmdk file with no issues).  However, the Forensic Scanner was not intended to be run against your local, live system.

Once you have an image to work with, you need to mount it as a volume in order to run the Forensic Scanner against it.  If you have a raw/dd image, a .vmdk or .vhd file, or a .E0x file, FTK Imager will allow you to mount any of these in read-only format.

If you have a raw/dd format image file, you can use vhdtool to add a footer to the file, and then use the Disk Manager to attach the VHD file read-only.  If you use this method, or if you mount your image file as a VMWare virtual machine, you will also be able to list and mount available VSCs from within the image, and you can run the Scanner against each of those.

If you have any version of F-Response, you can mount a remote system as a volume, and run the Forensic Scanner against it.  Don't take my word for it...see what Matt, the founder of F-Response, says about that!

If you have issues with accessing the contents of the mounted image...Ken Johnson recently tried to access a mounted image of a Windows 8 system from a Windows 7 analysis system and ran into issues with may run into issues with permissions, as well.  If that happens, you might try mounting the image as "File System/Read-Only", rather than the default "Block Device/Read-Only", or you may want to run the Scanner using something like RunAsSystem in order to elevate your privileges.

If your circumstances require it, you can even use FTK Imager (FTK Imager Lite v3.x is now available and supports image mounting) to access an acquired image, and then use the export function to export copies of all of the folders and files from the image to a folder on your analysis system, or on a USB external drive, and then run the scanner against that target.

Okay, but what about stuff other than Windows as your target?  Say that you have an iDevice (or an image acquired from one...)...the Forensic Scanner can be updated (it's not part of the current download, folks) to work with these images, courtesy of HFSExplorer.  Caveat: I haven't tested this yet, but from the very beginning, the Forensic Scanner was designed to be extensible in this manner.

Again, if you opt to run the Forensic Scanner against your local drive (by typing "C:\Windows\system32" into the tool), that's fine.  However, I can tell you it's not going to work, so please don't email me telling me that it didn't work.  ;-)

Forensic Scanner Links
Links where the Forensic Scanner is mentioned:
F-Response Blog: F-Response and the ASI Forensic Scanner
Grand Stream Dreams: Piles o' Linkage
SANS Forensics Blog: MiniFlame, Open Source Forensics Edition

Apparently, Kiran Vangaveti likes to post stuff that other people write...oh, well, I guess that imitation really is the sincerest form of flattery!  ;-)

The good folks over at RSA have had some interesting posts of late to their "Speaking of Security" blog, and the most recent one by Branden Williams is no exception.  In the post, Branden mentions "observables", as well as Locard's Exchange Principle...but what isn't explicitly stated is the power of correlating various events in order to develop situational awareness and context, something that we can do with timeline analysis.

An example of this might be a failed login attempt or a file modification.  In and of themselves, these individual events tell us something, but very little.  If we compile a timeline using the data sources that we have available, we can begin to see much more with regards to that individual event, and we go from, "...well, it might/could be..." to "...this is what happened."

SANS Forensic Summit 2013
The next SANS #DFIR Summit is scheduled for July 2013 (in Austin, TX) and the call for speakers is now open.

Prefetch Analysis
Adam posted recently regarding Prefetch file names and UNC paths, and that reminded me of my previous posts regarding Prefetch Analysis.  The code I currently use for parsing Prefetch files includes parsing of paths that include "temp" anywhere in the path (via grep()), and provides those paths separately at the end of the output (if found).  Parsing of UNC paths (any path that begins with two back slashes, or begins with "\Device") can also be included in that code.  The idea is to let the computer extract and present those items that might be of particular interest, so that the analyst doesn't have to dig through multiple lines of output.
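A sketch of that "let the computer flag it" idea (in Python, with made-up module paths; my actual parsing code is Perl):

```python
import re

# From a parsed list of module paths (as extracted from a .pf file),
# surface any path containing "temp" and any UNC-style path separately.
# (Per the discussion above, "\Device"-prefixed paths could be flagged
# as well; this sketch only checks for leading double backslashes.)
def flag_paths(module_paths):
    temp = [p for p in module_paths if re.search("temp", p, re.I)]
    unc = [p for p in module_paths if p.startswith("\\\\")]
    return temp, unc

mods = [r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL",
        r"\DEVICE\HARDDISKVOLUME1\USERS\TEST\APPDATA\LOCAL\TEMP\BAD.DLL",
        r"\\SERVER\SHARE\TOOL.DLL"]
temp, unc = flag_paths(mods)
print("temp paths:", temp)
print("UNC paths:", unc)
```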

Friday, October 19, 2012

DOSDate Time Stamps in Shell Items

Since this past spring, the term "shellbags" has been heard more and more often.  Searching for "shellbag analysis" via Google reveals a number of very informative links.  I'm going to gloss over the specifics of these links, but my doing so in no way minimizes any of the research, analysis and documentation by those who have contributed to the understanding of these Windows artifacts.

What I want to get to directly is the underlying data structures associated with the shellbags artifacts, specifically, the shell items and shell item ID lists, structures that Joachim Metz and others such as Kevin Moore have worked to identify and define.  Again, mentioning the contributions made by these two individuals is in no way intended to take away from work performed by others in this area.

Shell items and shell item ID lists are used in a number of artifacts and data structures on Windows systems.  Perhaps one of the most well-known of those artifacts is the Windows shortcut/LNK file; you can see from the MS specification regarding the file format where the shell items exist within the structure.  A number of Registry keys also use these data structures, including (but not limited to) Shellbags, ComDlg32, and MenuOrder.

Several of the data structures that make up the shell item ID lists include embedded data, to include time stamps.  In many cases (albeit not all), these embedded time stamps are DOSDate format, which is a 32-bit time stamp with a granularity of two seconds.
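A minimal decoder sketch, assuming the 32-bit value packs the date in the high word and the time in the low word (byte ordering within the stored structure may differ; this only illustrates the bit layout and the 2-second granularity):

```python
from datetime import datetime

# Unpack a 32-bit DOSDate value into a datetime.
def dosdate_to_dt(val):
    d, t = (val >> 16) & 0xFFFF, val & 0xFFFF
    return datetime(1980 + (d >> 9),   # 7-bit year offset from 1980
                    (d >> 5) & 0x0F,   # 4-bit month
                    d & 0x1F,          # 5-bit day
                    t >> 11,           # 5-bit hours
                    (t >> 5) & 0x3F,   # 6-bit minutes
                    (t & 0x1F) * 2)    # 5-bit two-second increments

# 2012-10-29 17:29:24 -- note that the seconds are always even
print(dosdate_to_dt((((2012 - 1980) << 9 | 10 << 5 | 29) << 16)
                    | (17 << 11 | 29 << 5 | 24 // 2)))
```

The five-bit seconds field is why every time stamp decoded from these structures lands on an even second.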

Now, since a lot of the analysis that we do is often based heavily upon not simply that an event or action occurred, but when it occurred, often having additional sources of time stamped data can be extremely valuable to an analyst.  However, there is much more to timeline analysis than simply having a time stamp from an artifact...the analyst must understand the context of that artifact with respect to the time stamp in question.

The question I would like to pose to the community is...what is the value of the embedded DOSDate time stamps within the shell items?

Let's first consider shellbags.  The keys that store these artifacts are mentioned in MS KB 813711, so we have an idea of how these artifacts are created.  In short, it appears that the shellbags artifacts are created (or modified) when a user accesses a folder via the Windows Explorer shell, and then repositions or resizes the window that appears on the desktop.  So let's say that I open Windows Explorer, navigate to the "C:\Windows\Temp" directory, and resize the window.  I would then expect to find indications of the path in the shellbags artifacts.  At this point, we would expect that the time stamps embedded within the shellbags artifacts (and keep in mind, more testing is required in order to verify this...) refer to the MAC times from the "Windows" and "Temp" folders at the time that the artifact was created.

If we can agree on this, even for the moment, can we then also agree that other activities outside of those that create or modify the shellbags artifacts will also act upon the MAC times for those folders?  For example, adding or deleting files or subfolders, or any other action that causes those folders to be modified will cause the last modified ("M") date to be...well...modified.

On Vista systems and above, the updating of last access times for file system objects has been disabled by default.  Even if it weren't, other actions and events not associated with the shellbags artifacts (AV scans, user activity, normal system activity, etc.) would also cause these times to be modified.

The same thing could be said for the ComDlg32 artifacts.  On Vista systems and above, several of the subkeys beneath the ComDlg32 key in the user's NTUSER.DAT hive contain values that are consistent with shell items, and are parsed in a similar manner.  The data structures that describe the files and folders in these shell items contain embedded DOSDate time stamps, but as with the shellbags, these artifacts can be affected by other actions and events that occur outside of the scope of the ComDlg32 key.

Given this, I would like to reiterate my question: what is the value of the "M" and "A" DOSDate time stamps embedded within shell item data structures?  The "C" time is defined as the creation date, and even with a 2 second granularity, I can see how this time stamp can be of value, particularly if (a) the described resource no longer exists on the system, or (b) the described resource is on remote or removable storage media.  However, I would think that adding the "M" and "A" times for a resource to a timeline could potentially add considerable noise and confusion, particularly if the nature and context of the information is not completely understood.  In fact, simply having so many artifacts that are not easily understood can have a significant detrimental impact on analysis. 

What are your thoughts? 

Thursday, October 18, 2012

Motivations behind the Forensic Scanner

At the recent OSDFC, I presented on and announced the release of the Forensic Scanner.

Since then, I've been asked to talk about things like the motivations behind the development of the Scanner, and what need I was trying to fill by creating it.  At the conference, shortly after my presentation, it was suggested to me that I should talk about what's different about the Scanner, how it's different from the other available frameworks, such as Autopsy and DFF.

One of the reasons I wrote the Scanner (and this applies to RegRipper, as well) is that over the years of performing analysis, I've found that as I've maintained a checklist of things to "look for" during an exam, that checklist has grown, become a spreadsheet, and continued to grow...but no matter how well-organized the spreadsheet is, it's still a spreadsheet and doesn't help me perform those checks faster.  If I have preconditions (something else that needs to be checked first) listed in the spreadsheet for a specific check, I have to go back into the image, do the initial check, and then see if the conditions have been met to perform the check I have listed in that row in the spreadsheet.

One good example of this is the ACMru Registry key in Windows XP.  This key can exist within the user's NTUSER.DAT hive, and illustrates what the user searched for on the system.  With Windows Vista, indications of the user's searches were moved out of the Registry, and with Windows 7, they were added back into the Registry, into the WordWheelQuery key.  So, if you wanted to see what a user had searched for, you had to first check the version of the Windows operating system (the "precondition"), and then refer to the checklist and follow the appropriate path to the Registry key for that particular platform.
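Encoded in a plugin, that precondition check collapses into a simple lookup; here's a sketch (the version labels are hypothetical, but the key paths are the documented user-hive locations):

```python
# Pick the user-search Registry key path based on the Windows version,
# rather than keeping the precondition in a spreadsheet row.
def search_key(os_version):
    paths = {
        "XP":   "Software\\Microsoft\\Search Assistant\\ACMru",
        "Win7": "Software\\Microsoft\\Windows\\CurrentVersion"
                "\\Explorer\\WordWheelQuery",
    }
    # Vista kept user search history outside the Registry
    return paths.get(os_version)

print(search_key("XP"))
print(search_key("Vista"))  # None
```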

All of this can be hard to remember over time, and this is true even for someone who performs a good deal of research into the Windows Registry.  Imagine what this would be like for someone who doesn't spend a great deal of time analyzing Windows systems in general.  The fact is that even with an extensive checklist or spreadsheet, this still requires a good deal of time and effort to maintain, as well as to engage.  After all, you can't just lay a spreadsheet over your acquired image and have all the answers pop up...yet, anyway.  An analyst has to actually sit down with the checklist and the acquired image and do something.

The plugins used by the Forensic Scanner, like those used by RegRipper, can be as simple as you want them to be.  For example, with RegRipper, you may not want to look for specific values in the Run key, because malware names and paths can change.  As such, you may want to dump all of the values in order to present them to the analyst.  You can also perform a modicum of analysis by grepping for terms such as "temp", to find any executables being launched from a temp directory (i.e., "Local Settings\Temp", or "Temporary Internet Files", etc.).  The plugins used by the Forensic Scanner can do something very similar.  For example, remember the ntshrui.dll issue?  You can write a plugin that specifically checks for a file with that name in the Windows directory, or you can write a plugin that dumps a list of all of the DLLs in the Windows directory, and either apply a whitelist ("these are the known good files"), or a blacklist.
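As a sketch of that first example (in Python, with made-up value names; not an actual RegRipper plugin), dumping all Run key values while also flagging temp-path launches might look like:

```python
import re

# Dump every Run key value for the analyst, but also flag any command
# line launched from a temp path.
def triage_run_values(values):
    return {name: cmd for name, cmd in values.items()
            if re.search(r"\\temp\\|temporary internet files", cmd, re.I)}

run = {
    "SoundMan": r"C:\Windows\SOUNDMAN.EXE",
    "updater":  r"C:\Users\test\Local Settings\Temp\svchost.exe",
}
for name, cmd in run.items():
    print(name, "->", cmd)                   # dump everything...
print("flagged:", triage_run_values(run))    # ...and flag the temp launch
```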

Note: The above paragraph is intended as an example, and does not indicate that current RegRipper plugins will work with the Forensic Scanner.  They won't.  A small amount of work is necessary to get the current RegRipper plugins to work with the Forensic Scanner.

Here's another good example...ever seen these blog posts?  Do they make sense?  How would you check for this?  Well, I'd start with a plugin that parses the shell item ID lists of all of the LNK files on the user Desktop, and compares those elements to the ASCII in the LinkInfo block, if there is one.  You could then extend this to the Recents folder, and because it's all automated, it will be done the same way, every time, no matter how many user profiles are on the system.
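Once a parser has produced both the path reconstructed from the shell item ID list and the ASCII path from the LinkInfo block, the comparison itself is trivial; here's a sketch with made-up paths (the hard part, parsing the LNK structures themselves, is assumed to have happened already):

```python
# Flag a shortcut whose shell item ID list path disagrees with the
# ASCII path in its LinkInfo block -- a sign of tampering or trickery.
def lnk_paths_disagree(idlist_path, linkinfo_path):
    if not linkinfo_path:  # no LinkInfo block present in the LNK file
        return False
    return (idlist_path.lower().rstrip("\\")
            != linkinfo_path.lower().rstrip("\\"))

print(lnk_paths_disagree(r"C:\Tools\app.exe", r"C:\Tools\app.exe"))  # False
print(lnk_paths_disagree(r"D:\payload.exe", r"C:\Tools\app.exe"))    # True
```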

A final example...Corey Harrell reached out to me a bit ago, asking me to create a plugin for a couple of Registry keys (discussed here, in an MS KB article).  I did some looking around and found this valuable blog post on the topic of those keys.  The Order values beneath the subkeys in question contain shell item ID lists, much like the values found in the ShellBags, as well as ComDlg32 entries.  The linked blog post explains that the Order values often contain references to items the user explicitly selected, even after the application or resource has been deleted.  As such, why not include this in every scan you run...who knows when something of critical value to your examination is going to pop up?  Most of us are aware that the one time we don't check these Registry keys will be the time that critical information is found to reside within the keys.  So...write the plugin once, and run it with every scan, automatically; if you don't need it, ignore it.  However, if you do need the information, it's right there.

One of the motivations for this sort of automation is to bridge a critical gap and get more organizations to perform root cause analysis, so that they can develop the necessary intelligence they need to properly defend themselves.  Right now, many organizations will react to an apparent compromise or infection by wiping the system in question, and trying to address all of the vulnerabilities that are out there...that is, if they do anything at all.  However, if they performed a root cause analysis, and determined the initial vector of compromise or infection, they could provide an intelligence-driven defense, reducing and optimizing their resources.  One of the reasons for this sort of reaction may be that performing a root cause analysis takes too long...the Forensic Scanner does not obviate analysis; rather, it is intended to get you there faster.

I understand that the idea of the Forensic Scanner is outside what most would consider "normal" thinking within the community.  This is also true for RegRipper, as well as Timeline Analysis.  The idea is not to obviate the need for analysis...rather, it's to get analysts to the actual analysis faster.  After all, tools like RegRipper and now the Forensic Scanner will extract in seconds what currently takes an analyst hours or even days to do.

NOTE: In order to run the Scanner, you'll want to either disable UAC on your Windows 7 analysis system, or right-click on the program icon and choose "Run as administrator". 

Finally, it appears that the Forensic Scanner is already making its rounds at at least one conference.  Also, the Forensic Scanner was designed to work with F-Response, and Matt Shannon has a blog post indicating that F-Response + Forensic Scanner = Awesome!

Wednesday, October 03, 2012

Forensic Scanner

I've posted regarding my thoughts on a Forensic Scanner before, and just today, I gave a short presentation at the #OSDFC conference on the subject.

Rather than describing the scanner, this time around, I released it.  The archive contains the Perl source code, a 'compiled' executable that you can run on Windows without installing Perl, a directory of plugins, and a PDF document describing how to use the tool.  I've also started populating the wiki with information about the tool, and how it's used.

Some of the feedback I received on the presentation was pretty positive.  I did chat with Simson Garfinkel for a few minutes after the presentation, and he had a couple of interesting thoughts to share.  The second was on making the Forensic Scanner thread-safe, so it can be multi-threaded and use multiple cores.  That might make it onto the "To-Do" list, but it was Simson's first thought that I wanted to address.  He suggested that at a conference where the theme seemed to revolve around analysis frameworks, I should point out the differences between the other frameworks and what I was presenting on, so I wanted to take a moment to do that.

Brian presented on Autopsy 3.0, and along with another presentation in the morning, discussed some of the features of the framework.  There was the discussion of pipelines, and having modules to perform specific functions, etc.  It's an excellent framework that has the capability of performing functions such as parsing Registry hives (utilizing RegRipper), carving files from unallocated space, etc.  For more details, please see the web site.  

I should note that there are other open source frameworks available, as well, such as DFF.

The Forensic Scanner is...different.  Neither better, nor worse...because it addresses a different problem.  For example, you wouldn't use the Forensic Scanner to run a keyword search or carve unallocated space.  The scanner is intended for quickly automating repetitive tasks of data collection, with some ability to either point the analyst in a particular direction, or perform a modicum of analysis along with the data presentation (depending upon how much effort you want to put into writing the plugins).  So, rather than providing an open framework which an analyst can use to perform various analysis functions, the Scanner allows the analyst to perform discrete, repetitive tasks.

The idea behind the scanner is this...there're things we do all the time when we first initiate our analysis.  One is to collect simple information from the's a Windows system, which version of Windows is it, its time zone settings, is it 32- or 64-bit, etc.  We collect this information because it can significantly impact our analysis.  However, keeping track of all of these things can be difficult.  For example, if you're looking at an image acquired from a Windows system and don't see Prefetch files, what's your first thought?  Do you check the version of Windows you're examining?  Do you check the Registry values that apply to and control the system's prefetching capabilities?  I've talked with examiners whose first thought is that the user must have deleted the Prefetch files...but how do you know?
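As an example of the modicum of analysis a plugin could add, here's a sketch (not an actual Scanner plugin) that interprets the EnablePrefetcher value, a DWORD found under the PrefetchParameters key, so the analyst doesn't have to assume the Prefetch files were deleted:

```python
# Interpret the EnablePrefetcher DWORD for the analyst.
def prefetch_setting(value):
    meanings = {0: "prefetching disabled",
                1: "application launch prefetching only",
                2: "boot prefetching only",
                3: "application launch and boot prefetching"}
    return meanings.get(value, "unknown value: %d" % value)

print(prefetch_setting(3))
print(prefetch_setting(0))  # explains an empty Prefetch folder without any deletion
```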

Rather than maintaining extensive checklists of all of these artifacts, why not simply write a plugin to collect what data it is that you want to collect, and possibly add a modicum of analysis into that plugin?  One analyst writes the plugin, shares it, and anyone with that plugin will have access to the functionality without having to have had the same experiences as the analyst.  You share it with all of your analysts, and they all have the capability at their fingertips.  Most analysts recognize the value of the Prefetch files, but some may not work with Windows systems all of the time, and may not stay up on the "latest and greatest" in analysis techniques that can be applied to those files.  So, let's say that instead of dumping all of the module paths embedded in Prefetch files, you add some logic to search for .exe files, .dat files, and any file that includes "temp" in the path, and display that information?  Or, why not create whitelists of modules over time, and have the plugin show you all modules not in that whitelist?
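A sketch of that whitelist idea (the paths and whitelist below are made up):

```python
# Show the analyst only the module paths not already known-good from
# previous exams.
def unknown_modules(module_paths, whitelist):
    known = {w.upper() for w in whitelist}
    return [m for m in module_paths if m.upper() not in known]

whitelist = [r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL"]
mods = [r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL",
        r"\DEVICE\HARDDISKVOLUME1\USERS\TEST\APPDATA\LOCAL\TEMP\A.DAT"]
print(unknown_modules(mods, whitelist))  # only the TEMP .dat file survives
```

The whitelist grows with every exam, so the output shrinks over time to just the items worth a second look.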

Something that I and others have found useful is that, instead of forcing the analyst to use "profiles", as with the current version of RegRipper, the Forensic Scanner runs plugins automatically based on OS type and class, and then organizes the plugin results by category.  What this means is that for the system class of plugins, all of the plugins that pertain to "Program Execution" will be grouped together; this holds true for other plugin categories, as well.  This way, you don't have to go searching around for the information in which you're interested.

As I stated more than once in the presentation, the Scanner is not intended to replace analysis; rather, it's intended to get you to the point of performing analysis much sooner.  For example, I illustrated a plugin that parses a user's IE index.dat file.  Under normal circumstances when performing analysis, you'd have to determine which version of Windows you were examining, determine the path to the particular index.dat file that you're interested in, and then extract it and parse it.  The plugin does all of that for you.  In my test case, all of the plugins I ran against a mounted volume completed in under 2 seconds...that's scans of the system, as well as both of the selected user profiles.

So...please feel free to try the Scanner.  If you have any questions, you know where to reach me.  Just know that this is a work-in-progress, with room for growth.

Matt Presser identified an issue that the Scanner has with identifying user profiles that contain a dot.  I fixed the issue and will be releasing an update once I make a couple of minor updates to other parts of the code.

Tuesday, September 18, 2012

Network Artifacts found in the Registry

My Twitter account lit up with the "#HTCIACON" hash tag this past Monday morning, apparently due to the HTCIA Conference.  I was seeing a lot of tweets from HBGary, etc., so I went by the conference web page and took a look at the agenda.  As you can imagine, anything with the word "Registry" in it catches my eye immediately, and I saw that there was a lab that had to do with "network artifacts in the Registry".  I took a look at the abstract for the lab, and it seemed to focus on the NetworkList key in the Vista+ Registry...that's great for showing wired and wireless networks that a system has connected to, as well as providing some useful information that can be used in WiFi geolocation (discussed here, updated tool named "" is here).

This is a great start, but what about after the system is connected to particular networking media?  What other "network artifacts" can be found in the Registry? That's where we can dig into additional sources of information, specifically other Registry keys and values, to look for those additional network artifacts. 

Something else to think about is, what if some of the artifacts that you're pursuing have been deleted?  Some tools (CCleaner, etc.) will "erase" lists of known artifacts.  Some tools, such as USB Oblivion, are targeted to more specific artifacts.  As such, knowing more about the artifacts or artifact categories that you're interested in will help you determine (a) if some sort of "cleaning" has likely occurred, and (b) provide you with other indicators that you might be able to pursue.

This information can be useful in cases involving violations of acceptable use policies within organizations; however, it is not restricted to such cases.  I've used these artifacts in a number of intrusion cases, particularly where the compromised system was accessed remotely via RDP.

Here are some of the RegRipper plugins you can use to collect some other network artifacts, primarily from the user's hives:
- The plugin gives me what was described in the abstract for the conference lab I mentioned: network profile, first/last connection date (in SYSTEMTIME format, based on the system's localtime), gateway MAC address, and the network type (wired, wireless, broadband).  The TLN version of this plugin will allow you to incorporate this information into your timeline.
- As I mentioned previously, shellbags can provide a great deal of information regarding access to network resources; you might find not only indications of access to UNC paths, but also the use of Windows Explorer to access FTP resources (my publisher used to have me do this in order to transfer chapters...).  I should note that I've found entries such as these during exams.  If you do find information regarding access to FTP in the user's shellbags, you might also want to check out the Software\Microsoft\FTP\Accounts subkeys within the user hive, as well; you'll find not only the host connected to, but also the username used to access the site (if the login was successful).
- If the user uses the command line FTP utility that is native to Windows (ftp.exe), rather than Windows Explorer, to access an FTP site, you may find a reference to that executable in the user's MUICache key (in Windows 7, located in the USRCLASS.DAT hive).  Of course, if another GUI FTP client was used, you might expect to find information about that usage via the plugin.
- This plugin parses the user's "Map Network Drive MRU" key data, showing the network drives that the user has mapped via the Map Network Drive Wizard.
- A user may decide to click the Start button, and then simply type a UNC path into the "Run:" box; this information will be available via the plugin.
- The TypedPaths key maintains a history of the locations typed into the Windows Explorer Address Bar; similar to the RunMRU key, a user can type UNC paths to network resources into this location.
- If the user accesses other systems via RDP, you will find references to those connections in the Software\Microsoft\Terminal Server Client subkeys, specifically beneath the Default and Servers subkeys.  Of course, if you do find indications of off-system communications via this key, and you're analyzing a Windows 7 system, be sure to include the user's Jump Lists in your analysis, as well.
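As an aside, the first/last connection dates mentioned above are stored as 16-byte SYSTEMTIME structures.  Here's a minimal sketch of decoding one, assuming the documented layout of eight little-endian unsigned 16-bit fields (year, month, day-of-week, day, hour, minute, second, millisecond); remember that, per the above, these values are based on the system's localtime, not UTC:

```python
import struct
from datetime import datetime

def parse_systemtime(data):
    """Decode a 16-byte Windows SYSTEMTIME structure, as stored in the
    NetworkList profile date values, into a datetime object."""
    year, month, _dow, day, hour, minute, sec, ms = struct.unpack("<8H", data)
    # The day-of-week field is redundant for our purposes and is ignored.
    return datetime(year, month, day, hour, minute, sec, ms * 1000)
```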

There are also application-specific Registry entries to consider.  For example, the plugin is in version 20080325 at the moment, which means that it was originally written almost 4 1/2 years ago. 

Are there any other network artifacts within the Registry that you might be interested in that aren't covered here?  If so, let me know, or comment here.

Monday, September 17, 2012


Autopsy v3 is in beta...check it out here.  There are lots of screenshots, so if you're looking for a free, open source alternative to commercial analysis frameworks, take a look.  Also, consider DFF, another useful framework (discussed in DFwOST) to help get you started in analysis, or to use side-by-side with your other tools in order to validate what you're seeing.  If you're in Chantilly, VA, during the first week of Oct, be sure to sign up for OSDFC; it looks like Brian's going to be talking about Autopsy there.

Speaking of open source tools, Volatility has really taken off, with the establishment of the Volatility Labs blog, as well as the Month of Volatility Plugins (MoVP).  In the month of September alone, there have been half a dozen or so posts.  I really like how the descriptions of the plugins are laid out, as each provides several sections, including the Effects on Forensics, a description of what the plugin helps reveal.  Too often in our age of "social" media, simply linking to something, clicking "Like" or "Favorite", or retweeting something means that we miss out on the valuable insight that could easily be provided.  Good on the folks from Volatility for contributing to the growth of the community in more ways than simply providing a tool that makes memory collection and analysis achievable for a variety of platforms.  To see more of what these folks bring to the table, be sure to check out the Open Memory Forensics Workshop (OMFW) prior to OSDFC; OMFW is a half-day conference on 2 Oct, in Chantilly, VA, a short distance from the OSDFC venue, with presentations by some of the big names (and big heads!!) in memory forensics.

If you're not sure about using Volatility for analysis, check out the SemperSecurus post regarding using Volatility to analyze the Cridex malware.  In the post, Andre refers to a number of the useful Volatility commands for extracting information from a memory dump.

If you're using Volatility, or want to, be sure to check out MemGator, for automated memory sample extraction and HTML reporting.  According to the write-up, MemGator is also capable of extracting TrueCrypt keys.  If you do have access to a memory dump from a supported OS, this is a great tool to run that dump through for initial processing, before you move on to more targeted analysis.

Christian Buia has an interesting EMC blog post where he discusses a threat actor's lateral movement within an infrastructure via the Task Scheduler.

The latest rendition of the Perl code that I wrote, which Christian mentions in the post, can be found here, and Jamie's Python tool is discussed here.

Christian's blog post is very valuable to the community, as there have been a number of times (such as at the DC3 conference last January) where presenters have stated that "..lateral movement was used...", but don't go beyond that; I heard several attendees lament this fact, because without knowing what this "looks like", how would they find it within their infrastructure?  Christian gives an example of what this can "look like" within the infrastructure.  Some things you might find via host-based analysis would be indications of the use of at.exe on the primary node (Prefetch file, etc.), and artifacts of a Scheduled Task having been run on the secondary node (Windows Event Log entries, Scheduled Task logs, etc.).  Now, this is not the only means of lateral movement that can be used; there is also psexec and its variants, as well.
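On the primary node, the use of at.exe typically leaves an application Prefetch file behind (on workstation versions of Windows with prefetching enabled).  A quick, purely illustrative triage check for that artifact might look like this; the function name and pattern list are my own, not from any published tool:

```python
import fnmatch
import os

# Hypothetical triage check: look for Prefetch files left behind by tools
# commonly associated with Task Scheduler-based lateral movement.
SUSPECT_PATTERNS = ["AT.EXE-*.PF", "SCHTASKS.EXE-*.PF"]

def find_lateral_movement_pf(prefetch_dir):
    """Return Prefetch file names matching the suspect tool patterns."""
    hits = []
    for name in os.listdir(prefetch_dir):
        for pattern in SUSPECT_PATTERNS:
            if fnmatch.fnmatch(name.upper(), pattern):
                hits.append(name)
    return hits
```

A hit here is only a lead, not a finding; you'd still correlate it with Event Log entries and Scheduled Task logs on the secondary node, as described above.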

Sploited has some updates to TLN tools, which is pretty cool.  So far, three tools have been posted on the Google Code site for the project.  I haven't tried these tools yet, but I am looking for an opportunity to do so soon.  This is a great example of how someone in our industry sees a need and decides to step up and fill it...too many times, I think that many of us are hesitant to do so much as express a need...I'm not saying sit down and learn to program, I'm saying simply express a need..."It would be useful to have X."  After all, most of the folks in the community, from the Volatility crew to the person who writes a DOS batch file to tie RegRipper rip.exe commands together, all started that way.

Windows 8 is well on its way...check out Claus's Windows 8 Linkage post to see some of the new stuff available.

Threat Intel
I found this very interesting Trend Micro write-up on the Tinba banking trojan, said to be the "smallest", as it reportedly weighs in at 20KB.  Write-ups like these can often provide a great deal of threat intelligence, but they can also sometimes fall a bit short with respect to the amount of intelligence that can be derived from them and used within your own infrastructure.  I don't think that this is necessarily an intentional omission; rather, I think that in most cases, it's due to the perspective of the organization (in this case, an AV vendor), as well as the analyst(s) performing the actual analysis (malware RE folks).  What I mean by that is that if you're a reverse engineer who's really good at using a debugger and analyzing memory, your analysis is going to be slanted in that direction.  Very often what I try to do is read between the lines and see what isn't said, and what host-based artifacts should be there that aren't discussed, so that the information provided can be used as part of a root cause analysis.

A couple of things I found interesting within the write-up:

This malware apparently uses the user's Run key for persistence.  Many sites, including Microsoft themselves, have stated that this persistence mechanism allows the malware to remain persistent following a system boot...however, it is more correct to say that it allows the malware to start automatically the next time the user logs in to the system.  Yes, the malware will run after a reboot, but only after that user logs in again.

Note: the write-up states that the malware creates a Registry "key" for itself; this is incorrect - it creates a Registry value beneath the Run key.  This may not be a huge distinction to many, but when it comes to things like incorporating Registry data into timeline analysis, it will make a significant difference.  Registry keys include embedded time stamps within their structure, referred to as the key "LastWrite" time; Registry values do not have this element in their structure.

I did not find any mention within the write-up as to which platform was used to analyze the malware; we can assume, based on a couple of the screen captures, that it was likely Windows XP, but there's no indication as to whether the platform is 32- or 64-bit.  This is important to consider, because if the malware is 32-bit and infects a 64-bit system, the location of the persistence mechanism will be slightly different.

The malware also apparently modifies another Registry value that controls Internet Explorer; specifically, within the "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3" key, the value "1609" is modified to 0.  There is other malware out there that makes similar modifications, and MS is kind enough to provide a KB article that describes what the various values beneath these keys refer to. 
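For reference, the KB article documents that the data for these numbered action values generally maps to one of three settings (0 = allowed, 1 = prompt, 3 = disallowed).  A small helper like the following, which is purely illustrative, can make plugin output for these values more readable:

```python
# Mapping of Internet Settings zone action data, per the MS KB article on
# security zone registry entries: 0 = allowed, 1 = prompt, 3 = disallowed.
ZONE_ACTION = {0: "Allowed", 1: "Prompt", 3: "Disallowed"}

def describe_zone_setting(value_name, data):
    """Render a Zones\<n> value (e.g., "1609") and its data as a string."""
    setting = ZONE_ACTION.get(data, "Unknown (%d)" % data)
    return "%s = %s" % (value_name, setting)
```

In this case, "1609" set to 0 means the behavior governed by that value occurs silently, with no prompt to the user, which is consistent with malware trying to keep a low profile.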

Something else that the malware does is modify the user.js Firefox file within the user's profile; as there doesn't seem to be any indication of time stamp manipulation, this should show up as an "M..." entry for that file in your timeline.  The modification that the malware makes apparently disables anti-phishing warnings in Firefox.

While there is some discussion of the malware communications mechanism, it mostly centers on the fact that comms are encrypted.  Earlier in the write-up, the report states that the malware primarily uses 4 DLLs, one of which is ws2_32.dll.  This may provide its off-system communications capabilities, and for host-based analysts, it can tell us something else.  For example, if this is the only DLL utilized for off-system communications, and others such as wininet.dll are not used, then this would tell us that we should not expect to see artifacts of the communications in the user's index.dat file, and we should probably consider spelunking into the pagefile for artifacts of communications.
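A very crude way to check which networking DLLs a sample references is to scan the file's raw bytes for the DLL names.  A proper check would parse the PE import table (for example, with the pefile module); this sketch, with names of my own choosing, just shows the idea:

```python
# Rough triage only: scan a PE file's raw bytes for networking DLL names.
# A real check should parse the import table rather than grep raw bytes,
# since strings can appear for other reasons (or be absent if packed).
NETWORK_DLLS = [b"ws2_32.dll", b"wininet.dll", b"winhttp.dll", b"urlmon.dll"]

def network_dlls_referenced(path):
    """Return the networking DLL names found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read().lower()
    return [d.decode() for d in NETWORK_DLLS if d in data]
```

Seeing ws2_32.dll but not wininet.dll is the sort of result that, per the reasoning above, tells you where not to look for communications artifacts on the host.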

Thursday, September 06, 2012

32-bit EXEs on a 64-bit System

Anyone who's done digital analysis of Windows systems knows that they can be pretty complex beasts to analyze.  To add a little "fun" to the mix, the location of artifacts on a 64-bit Windows system will depend on whether the programs used are 32- or 64-bit programs.


Now, what does all this mean?  Let's say you have a copy of a malware sample, and you submit it to VirusTotal, or you already know how the malware is identified by some of the various AV vendors.  When you read the AV vendor write-ups on the malware, you'll notice that they very rarely identify the analysis platform used; so, if the malware uses the Run key for persistence, you might open the Software hive in a viewer and navigate to the Run key, not find any unusual entries, and then simply assume that the malware either didn't execute, or failed to completely install and run.

You should be aware that 64-bit Windows systems perform redirection when it comes to 32-bit applications.  What this means is that if you check the PE header of the malware file and find that it was compiled for 32-bit platforms, and the system you're analyzing is a 64-bit platform, you're going to need to look in a few additional (maybe I should say "other", instead) areas for your artifacts.

Registry redirection primarily affects the Software and NTUSER.DAT hives; there's (apparently) no "Wow6432Node" subkey in the System hive (I haven't seen one yet).  As such, if malware usually uses the Run key, then you should check the Run keys in both the HKLM\Software\Microsoft\Windows\CurrentVersion and the HKLM\Software\Wow6432Node\Microsoft\Windows\CurrentVersion paths.
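The "check both views" logic is mechanical enough to encode directly in a plugin.  Here's a minimal sketch (my own illustration, not an existing RegRipper plugin) that produces the list of Software-hive Run key paths to examine:

```python
def run_key_paths(os_is_64bit):
    """Return the Run key paths (relative to the root of the Software
    hive) that should be checked for persistence entries.  On 64-bit
    Windows, 32-bit processes see the Wow6432Node-redirected view, so
    both paths need to be examined."""
    paths = [r"Microsoft\Windows\CurrentVersion\Run"]
    if os_is_64bit:
        paths.append(r"Wow6432Node\Microsoft\Windows\CurrentVersion\Run")
    return paths
```

The same pattern applies to the RunOnce keys and to the corresponding paths in the NTUSER.DAT hive.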

Something to be aware of is that none of the RegRipper plugins available in the current archive that check the Run keys in either the Software or NTUSER.DAT hives account for redirection.  I have a number of plugins in my own repository that do take this into account, and have been responsible for some pretty cool/significant finds.

Now, most times, within the file system, all we usually see with respect to redirection is that when we install 32-bit applications on a 64-bit system, we'll see a C:\Program Files directory as well as a C:\Program Files (x86) directory.  However, due to file system redirection, 32-bit malware that, for example, attempts to write a file to the C:\Windows\System32\config\systemprofile\AppData\Local folder (on a Windows 7 or Win2008R2 system) will, on a 64-bit system, actually write that file to the C:\Windows\SysWOW64\config\systemprofile\AppData\Local folder.  Use of environment variables such as %Temp% will have a similar effect.
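The file system side of this can be sketched as a simple path translation.  This is illustrative only; real WOW64 redirection is case-insensitive, has exempted subdirectories, and covers more than just this one path:

```python
def redirected_path(path, process_is_32bit, os_is_64bit):
    """Sketch of where a write under \Windows\System32 actually lands
    when issued by a 32-bit process on a 64-bit Windows system."""
    if process_is_32bit and os_is_64bit:
        # WOW64 transparently maps System32 to SysWOW64 for 32-bit code.
        return path.replace(r"\Windows\System32", r"\Windows\SysWOW64", 1)
    return path
```

The analytical takeaway is the inverse of the function: if a write-up says an artifact lives under System32, and your sample and platform bitness differ, check SysWOW64 as well.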

Again, I have yet to see any Registry redirection that applies to the System hive.  I have seen 32-bit malware installed on 64-bit Windows systems as a service, and the services entries went right into the ControlSet00n\services\ subkeys.

Note: The current version of the plugin includes some simple checks (via grep()) for potentially suspicious paths, such as those that contain the term "temp".  Based on some recent findings, I'm going to go back to that and add a check for SysWOW64...and I will likely add that check to other plugins, as well.

So What?
So, why should anyone care about any of this at all?  Well, let's say that you're doing some analysis of a system, and you think that there may be malware on the system, malware for which several AV vendor write-ups state that it uses the Run key for persistence.  Okay, so we run RegRipper or open the hive in a viewer and navigate to that key...but we don't see a reference to any unusual files.  At this point, what do we do?  In some cases, many of us might check that item off of our checklist, and move on to the next step.  However, did we really complete that check?  After all, AV vendor write-ups are notorious for not providing complete information...I have yet to see a write-up that states, "here is the PE header info for this malware, and we performed dynamic analysis by running the malware in a Windows XP SP3 (32-bit) VirtualBox VM..", and to be honest, I don't think that I've seen that because to the AV vendors, none of that is important.  But to an incident responder or a digital forensic analyst, it can be very important.

A couple of months ago, I was looking at a Software hive from a system that had been compromised, and I found some interesting artifacts in the Microsoft\Tracing key.  Now, this was a 64-bit Windows system I was analyzing, and I knew that at least one of the programs that had been installed on the system was 32-bit, so I decided to check the WOW6432Node\Microsoft\Tracing key, and not only did I find the subkeys that I thought would be there, but I also found references to other 32-bit programs, as well. 

So, when performing analysis or asking questions in order to further your analysis, it is important to realize that not only is it very important to be aware of the version of Windows that you are analyzing, but it's also important to be aware of whether the version is 32- or 64-bit, as this can have a significant impact on where you look for artifacts, in both the file system as well as the Registry.

This also applies to when we're sharing information within the community; for example, after having read this blog post, it should be pretty clear that this write-up of ADrive on involved a 32-bit app run on a 64-bit system.  However, if you weren't aware of this, would you look for the \Wow6432Node\Microsoft\Tracing\ADrive Desktop_RASAPI32 key, not find "Wow6432Node" in the Software hive, and then just figure that the application had not been run?

Registry Redirection
File System Redirector