Friday, December 28, 2012

Malware Detection

Corey recently posted to his blog regarding his exercise of infecting a system with ZeroAccess.  In his post, Corey provides a great example of a very valuable malware artifact, as well as an investigative process that can lead to locating malware that may be missed by more conventional means.


This post isn't meant to take anything away from Corey's exceptional work; rather, my intention is to show another perspective on the data, sort of like "The Secret Policeman's Other Ball".  Corey has always done a fantastic job of performing research and presenting his findings, and my aim here is simply to use his work and blog post as a basis and a stepping stone for another view of the same data.

The ZA sample that Corey looked at was a bit different from what James Wyke of SophosLabs wrote about, but there were enough commonalities that some artifacts could be used to create an IOC or plugin for detecting the presence of this bit of malware, even if AV didn't detect it.  Specifically, the file "services.exe" was infected, an EA attribute was added to the file record in the MFT, and a Registry modification occurred in order to create a persistence mechanism for the malware.  Looking at these commonalities is similar to looking at the commonalities between various versions of the Conficker family, which created a randomly-named service for persistence.

From the Registry hives from Corey's test, I was able to create and test a RegRipper plugin that does a pretty good job of filtering through the Classes/CLSID subkeys (from the Software hive) and locating anomalies.  In its original form, the MFT parser that I wrote finds the EA attribute, but doesn't specifically flag on it, and it can't extract the shell code and the malware PE file (because the data is non-resident).  However, there were a couple of interesting things I got from parsing the MFT...
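As an illustration of what specifically flagging on the EA attribute might look like, here's a minimal sketch in Python.  It's an assumption-laden example, not production code: it expects an exported $MFT with standard 1024-byte FILE records, derives record numbers from position within the file, and skips the update sequence (fixup) handling that a real parser would need.

```python
# Minimal sketch: walk an exported $MFT and flag records that carry
# $EA_INFORMATION (0xD0) or $EA (0xE0) attributes, as this ZeroAccess
# sample did.  Assumes standard 1024-byte FILE records.
import struct
import sys

RECORD_SIZE = 1024
EA_TYPES = {0xD0: "$EA_INFORMATION", 0xE0: "$EA"}

def attributes(record):
    """Yield the type code of each attribute header in a FILE record."""
    offset = struct.unpack_from("<H", record, 0x14)[0]   # offset to first attribute
    while offset + 8 <= len(record):
        attr_type, attr_len = struct.unpack_from("<II", record, offset)
        if attr_type == 0xFFFFFFFF or attr_len == 0:     # end-of-attributes marker
            break
        yield attr_type
        offset += attr_len

# NOTE: update sequence (fixup) values are not applied here, for brevity;
# a production parser should apply them before trusting record contents.
with open(sys.argv[1], "rb") as mft:
    recno = 0
    while True:
        record = mft.read(RECORD_SIZE)
        if len(record) < RECORD_SIZE:
            break
        if record[:4] == b"FILE":
            for attr_type in attributes(record):
                if attr_type in EA_TYPES:
                    print("record %d: %s attribute present" % (recno, EA_TYPES[attr_type]))
        recno += 1
```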

If you refer to Corey's post, take a look at the section regarding the MFT record for the infected services.exe file.  If you compare the time stamps from the $STANDARD_INFORMATION attribute to those of the $FILE_NAME attribute that Corey posted, you'll see an excellent example of file system tunneling.  I've talked about this in a number of my presentations, but it's pretty cool to see an actual example of it.  I know that this isn't really "outside the lab", per se, but still, it's pretty cool to see this functionality demonstrated as a result of a sample of malware, rather than a contrived exercise.  Hopefully, this example will go a long way toward helping analysts understand what they're seeing in the time stamps.

Corey also illustrated an excellent use of timeline analysis to locate other files that were created or modified around the same time that the services.exe file was infected.  What the timeline doesn't show clearly is that the time stamps were extracted from the $FILE_NAME attribute in the MFT...the $STANDARD_INFORMATION attributes for those same files indicate that some sort of time stamp manipulation ("timestomping") occurred, as many of the files have M, A, and B times from 13 and 14 Jul 2009.  However, the date in question that Corey looked at in his blog post was 6 Dec 2012 (the day of the test).  Incorporating Prefetch file metadata and Registry key LastWrite times into the timeline would show a pretty tight "grouping" of these artifacts at or "near" the same time.
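For illustration, here's a minimal sketch of that $SI-versus-$FN check.  The heuristics are common ones (many timestomping tools rewrite only the $SI times), the parsed_records input is hypothetical parser output, and the specific times of day are made up for the example.  Note that legitimate activity, including file system tunneling, can also trip these checks, so treat hits as leads rather than conclusions.

```python
# Sketch: flag MFT records whose $STANDARD_INFORMATION times look stomped
# when compared to the $FILE_NAME times.
from datetime import datetime

def flag_timestomp(si_created, fn_created):
    """Return the reasons (if any) that this record looks timestomped."""
    reasons = []
    if si_created < fn_created:
        reasons.append("$SI creation predates $FN creation")
    if si_created.microsecond == 0:
        reasons.append("$SI creation has zeroed sub-second precision")
    return reasons

# hypothetical parser output, loosely modeled on the dates discussed above
parsed_records = [
    {"name": "services.exe",
     "si_created": datetime(2009, 7, 13, 23, 11, 26),
     "fn_created": datetime(2012, 12, 6, 14, 52, 10)},
]

for rec in parsed_records:
    for reason in flag_timestomp(rec["si_created"], rec["fn_created"]):
        print("%s: %s" % (rec["name"], reason))
```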

Another interesting finding from analyzing the MFT is that the "new" services.exe file was MFT record number 42756 (see Corey's blog entry for the original file's record number).  Looking "near" that MFT record number, there are a number of files and folders that were created (and "timestomped") just prior to the new services.exe file record being created.  Searching for some of the file names and paths (such as C:\Windows\Temp\fwtsqmfile00.sqm), I found references to other variants of ZeroAccess.  What is very interesting about this is the relatively tight grouping of the file and folder creations, based not on time stamps or time stamp anomalies, but instead on MFT record numbers.
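Here's a minimal sketch of that record-number grouping idea.  The entries list stands in for hypothetical MFT parser output, and the window size is an arbitrary assumption; record numbers are allocated roughly sequentially, so a burst of malware file creations tends to land in adjacent records even when the time stamps have been stomped.

```python
# Sketch: list parsed MFT entries whose record numbers fall "near" a
# known-bad record, regardless of what the time stamps say.
KNOWN_BAD_RECNO = 42756   # the "new" services.exe from Corey's test
WINDOW = 50               # arbitrary; how far to look on either side

entries = [  # hypothetical (recno, path) tuples from an MFT parser
    (42731, r"C:\Windows\Temp\fwtsqmfile00.sqm"),
    (42756, r"C:\Windows\System32\services.exe"),
]

for recno, path in sorted(entries):
    if abs(recno - KNOWN_BAD_RECNO) <= WINDOW:
        print("%6d  %s" % (recno, path))
```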

Some take-aways from this...at least what I took away...are:

1. Timeline analysis is an extremely powerful analysis technique because it provides us with context, as well as an increased relative level of confidence in the data we're analyzing.

2. Timeline analysis can be even more powerful when it is not the sole analysis technique, but is incorporated into an overall analysis plan.  What about that Prefetch file for services.exe?  A little bit of Prefetch file analysis would have produced some very interesting results, and what was found through this analysis technique would have led to other artifacts that should be examined in the timeline.  Artifacts found outside of timeline analysis can be used as search terms or pivot points in a timeline, which then provides context to those artifacts, which can then be incorporated back into other analyses.

3. Some folks have told me that having multiple tools for creating timelines makes creating timelines too complex a task; however, the tools I tend to create and use are multi-purpose.  For example, I use pref.pl (I also have a 'compiled' EXE) for Prefetch file analysis, as well as for parsing Prefetch file metadata into a timeline.  I use RegRipper for parsing (and some modicum of analysis of) Registry hives, as well as to generate timeline data from a number of keys and value data.  I find this to be extremely valuable...I can run a tool, find something interesting in a data set as a result of the analysis, and then run the tool again, against the same data set, but with a different set of switches, and populate my timeline (see the sketch following this list).  I don't need to switch GUIs and swap out dongles.  Also, it's easy to remember the various tools and switches because (a) each tool is capable of displaying its syntax via '-h', and (b) I created a cheat sheet for the tool usage.

4. Far too often, a root cause analysis, or RCA, is not performed, for whatever reason.  We're losing access to a great deal of data, and as a result, we're missing out on a great deal of intel.  Intel such as, "hey, what this AV vendor wrote is good, but I tested a different sample and found this...".  Perhaps the reason for not performing the RCA is that "it's too difficult", "it takes too long", or "it's not worth the effort".  Well, consider my previous post, Mr. CEO...without an RCA, are you being served?  What are you reporting to the board or to the SEC, and is it correct?  Are you going with, "it's correct to the best of my knowledge", after you went to "Joe's Computer Forensics and Crabshack" to get the work done?
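As mentioned in item 3 above, here's a minimal sketch of the single-events-file workflow: several tools append events to one file, and a short script sorts them into a timeline.  The five-field, pipe-delimited TLN layout (time|source|host|user|description) is the convention my tools use; adjust the field handling if yours differ.

```python
# Sketch: merge pipe-delimited TLN events from several parser outputs into
# one sorted timeline.
import sys
from datetime import datetime, timezone

def load_events(paths):
    """Collect TLN events (epoch|source|host|user|description) from files."""
    events = []
    for path in paths:
        with open(path) as fh:
            for line in fh:
                fields = line.rstrip("\n").split("|")
                if len(fields) == 5 and fields[0].isdigit():
                    events.append((int(fields[0]), fields[1], fields[4]))
    return events

# usage: python timeline.py events_prefetch.txt events_reg.txt ...
for epoch, source, desc in sorted(load_events(sys.argv[1:])):
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    print("%s  %-10s  %s" % (ts.strftime("%Y-%m-%d %H:%M:%S"), source, desc))
```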

Now, to add to all of the above, take a look at this post from the Sploited blog, entitled Timeline Pivot Points with the Malware Domain List.  This post provides an EXCELLENT example of how timeline analysis can be used to augment other forms of analysis, or vice versa.  The post also illustrates how this sort of analysis can easily be automated.  In fact, this can be part of the timeline creation mechanism...when any data source is parsed (i.e., browser history, the TypedUrls Registry key, shellbags, etc.), have any extracted URLs compared against the MDL, and then generate a flag of some kind within the timeline events file, so that the flag "lives" with the event.  That way, you can search for those events (based on the flag) after the timeline is created, or, as part of your analysis, create a timeline of only those events.  This would be similar to scanning all files in the Temp and system32 folders, looking for PE files with odd headers or mismatched extensions, and then flagging them in the timeline, as well.
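Here's a minimal sketch of what that flagging might look like; the mdl.txt file (one known-bad domain per line) and the sample event description are assumptions for illustration.

```python
# Sketch: tag a timeline event description with an [MDL] flag when a URL in
# it resolves to a domain on a local Malware Domain List export.
from urllib.parse import urlparse

def load_mdl(path="mdl.txt"):
    """One known-bad domain per line; a hypothetical local MDL export."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def flag_event(description, bad_domains):
    """Append an [MDL] flag to an event description if a listed URL appears."""
    for token in description.split():
        host = urlparse(token).netloc.lower()
        if host and host in bad_domains:
            return description + " [MDL]"
    return description

bad = load_mdl()
print(flag_event("TypedUrls - http://baddomain.example/gate.php", bad))
```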

Great work to both Corey and Sploited for their posts!

Friday, December 21, 2012

Are You Being Served?

Larry Daniel recently posted to his Ex Forensis blog on a very interesting topic: "The Perils of Using the Local Computer Shop for Computer Forensics".  I've thought about this before...when I was on the ISS ERS (and later the IBM ISS ERS) team, on more than one occasion we'd arrive on-site to work with another team, or to take over after someone else had already done some work.  In a couple of instances, I worked with other teams that, while technically skilled, were not full-time DFIR folks.  Larry's post got me thinking about who is being asked to perform DFIR work, and the overall effect that this has on the industry.

There's a question that I ask myself sometimes, particularly when working on exams...am I doing all I can to provide the best possible product to my customers?  As best I can, I work closely with the customer to establish the goals of the exam, and to determine what they are most interested in.  I do this because, like most analysts, I can spend weeks finding all manner of "interesting" stuff, but my primary interest lies in locating artifacts that pertain to the customer's questions, so that I can provide them with what they need in order to make the decisions they need to make.  Wherever possible, I try to find multiple artifacts to clearly support my findings, and I avoid blanket statements and speculation.

Also, something that I do after every exam is take a look at what I did and what I needed to do, and ask myself if there's a way I could do it better (faster, more comprehensive and complete, etc.) the next time.

Let's take a step away from DFIR work for a moment.  Like many, I make use of others' services.  I own a vehicle, which requires regular upkeep and preventative maintenance.  Sometimes, if all I need is an oil change, I'll go to one of the commercial in-and-out places, because I've looked into the service that they provide and what it entails, and that's all I need at the moment.  However, when it comes to other, perhaps more specialized maintenance...brake work, inspections recommended by the manufacturer, as well as inspections of a trailer I own...I'm going to go with someone I know and trust to do the work correctly.  Another thing I like about working with folks like this is that we tend to develop a relationship where, if during the course of their work they find something else that requires my attention, they'll let me know, inform me about the issue, and let me make the decision.  After all, they're the experts.

Years ago...1992, in fact...I owned an Isuzu Rodeo.  I'd take it to one of the drive-in places to get the oil changed on a Saturday morning.  The first time I took it to one place, I got an extra charge on my bill for a 4-wheel drive vehicle.  Hold on, I said!  Why are you adding a charge for a 4-wheel drive vehicle, when the vehicle is clearly 2-wheel drive?  The manager apologized, and gave me a discount on my next oil change.  However, a couple of months later, I came back to the same shop with the same vehicle and went through the same thing all over again.  Needless to say, had I relied on the "expertise" of the mechanics, I'd have paid more than I needed to, several times over.  I never went back to that shop again, and from that point on, I made sure to check everything on the list of services performed before paying the bill.

Like many, I own a home, and there are a number of reasons for me to seek services...HVAC, as well as other specialists (particularly as a result of Super Storm Sandy).  I tend to follow the same sort of path with my home that I do with my vehicles...small stuff that I can do myself, I do.  Larger stuff that requires more specialized work, I want to bring in someone I know and trust.  I'm a computer nerd...I'm not an expert in automobile design, nor am I an expert in home design and maintenance.  I can write code to parse Registry data and shell items, but I am not an expert in building codes.

So, the question I have for you, reader, is this...how do you know that you're getting quality work?  To Larry's point, who are you hiring to perform the work? 

At the first SANS Forensic Summit, I was on a panel with a number of the big names in DFIR, several of whom are SANS instructors.  One of the questions that was asked was, "what qualities do you look for in someone you're looking to hire to do DFIR work?"  At the time, my response was simply, "what did they do last week?"  My point was, are you going to hire someone to do DFIR work, if last week they'd done a PCI assessment and the week prior to that, they'd performed a pen test?  Or would you be more likely to hire someone who does DFIR work all the time? I stand by that response, but would add other qualifications to it.  For example, how "tied in" are the examiners?  Do they simply rely on the training they received at the beginning of their careers, or do they continually progress in their knowledge and education?  Do they seek professional improvement and continuing education?  More importantly, do they use it?  Maybe the big question is not so much that the examiners do these things, but do their managers require that the examiners do these things, and make them part of performance evaluations?

Are you being served?

Addendum:  Why does any of this matter?  So what?  Well, something to consider is, what will a CEO be reporting to the board, as well as to the SEC?  Will the report state, "nothing found", or worse, will the report be speculation of a "browser drive-by"?  In my experience, most regulatory organizations want to know the root cause of an issue (such as a compromise or data leakage)...they don't want a laundry list of what the issue could have been.

In addition, consider the costs associated with the theft of PCI (or any other sensitive) data; if an organization is compromised, and they hire the local computer repair shop to perform the "investigation", what happens when PCI data is discovered to be involved, or potentially involved?  Well, you have to go pay for the investigation all over again, only this time it's after someone else has come in and "investigated", and this is going to have a potentially negative effect on the final report.  I think plumbers have a special fee for helping folks who have already tried to "fix" something themselves.  ;-)

Look at the services that you currently have in your business.  Benefits management.  Management of a retirement plan.  Payroll.  Do you go out every month and select the lowest bidder to provide these services?  Why treat the information security posture of your organization this way?

Saturday, December 15, 2012

There are FOUR lights!

Okay, you're probably wondering what Picard and one particular episode of Star Trek TNG have to do with forensicating.  Well, to put it quite simply...everything!

I recently posted a question in a forum regarding shellbag analysis, asking who was actually performing it as part of their exams.  One answer I got was, "...I need to start."  When I asked this same question of a roomful of forensicators at the PFIC 2012 conference, two raised their hands...and one admitted that they hadn't done so since SANS training.

During exams, I've seen shellbags that contain artifacts of user activity not found anywhere else on the system.  For example, I've seen the use of Windows Explorer to perform FTP transfers (my publisher used to have me do this to transfer files), where artifacts of that activity were not found anywhere else on the system.  When this information was added to a timeline, a significant portion of the exam sort of snapped into place, and became crystal clear.

Something I've seen with respect to USB devices connected to Windows systems is that our traditional methodologies for parsing this information out of a system are perhaps...incomplete.  I have seen systems where some devices are not identified as USB storage devices by Windows (rather, they're identified as portable devices...iPods, digital cameras, etc.), and as such, starting by examining the USBStor subkeys means that we may miss some of the devices that could be used in intellectual property theft, or in the creation and trafficking of illicit images.  Yet, I have seen clear indications of a user's access to these devices within the shellbags artifacts, in part because of my familiarity with the actual data structures themselves.
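As an illustration of looking beyond USBStor, here's a short sketch that enumerates the Windows Portable Devices key from an exported SOFTWARE hive.  It assumes Willi Ballenthin's python-registry module is installed; adjust the hive file name to suit.

```python
# Sketch: list devices recorded under the Windows Portable Devices key,
# which may never appear under USBStor.
from Registry import Registry   # python-registry module

reg = Registry.Registry("SOFTWARE")   # an exported SOFTWARE hive
devices = reg.open("Microsoft\\Windows Portable Devices\\Devices")

for dev in devices.subkeys():
    try:
        friendly = dev.value("FriendlyName").value()
    except Registry.RegistryValueNotFoundException:
        friendly = "(no FriendlyName value)"
    # the key's LastWrite time plus the device name, one line per device
    print("%s  %s" % (dev.timestamp().isoformat(), friendly))
```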

The creation and use of these artifacts by the Windows operating system goes well beyond just the shellbags, as these artifacts are comprised of data structures known as "shell items", which can themselves be chained together into "shell item ID lists".  Rather than providing a path that consists of several ASCII strings that identify resources such as files and directories, a shell item ID list builds a path to a resource using these data structures, which some in the community have worked very hard to decipher.  What this work has demonstrated is that there is a great deal more information available than most analysts are aware.

So why is understanding shell items and shell item ID lists important?  Most of the available tools for parsing shellbags, for example, simply show the analyst the path to the resource, but never identify the data structure in question...they simply provide the ASCII representation to the analyst.  These structures are also used in the ComDlg32 subkey values in the NTUSER.DAT hive on Windows Vista and later systems, as well as in the IE Start Menu and Favorites artifacts within the Registry.  An interesting quote from the post on those artifacts:

Of importance to the forensic investigator is the fact that, in many cases, these subkeys and their respective Order values retain references to Start Menu and Favorites items after the related applications or favorites have been uninstalled or deleted.

I added the emphasis to the second half of the quote, because it's important.  Much like other available artifacts, references to files, folders, network resources and even applications are retained long after the items themselves have been uninstalled or deleted.  So understanding shell items is foundational to understanding these larger artifacts.
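To make the structure a bit more concrete, here's a minimal sketch of walking a shell item ID list.  The two-byte, size-prefixed layout with a zero terminator is the standard IDList structure; the demo blob is synthetic, and decoding the individual item bodies (which is where the real value lies) is left out.

```python
# Sketch: walk a shell item ID list (IDList).  Each shell item begins with a
# two-byte little-endian size that includes the size field itself, and the
# list ends with a two-byte zero terminator.  The first byte of each item's
# data is a type indicator (drive, folder, file, etc.).
import struct

def walk_idlist(data):
    """Yield the raw bytes of each shell item in an IDList blob."""
    offset = 0
    while offset + 2 <= len(data):
        size = struct.unpack_from("<H", data, offset)[0]
        if size == 0:       # terminator
            break
        yield data[offset + 2 : offset + size]
        offset += size

# usage with a tiny synthetic blob (one three-byte item, then the terminator)
demo = b"\x05\x00\x31\x00\x00" + b"\x00\x00"
for item in walk_idlist(demo):
    print("type indicator 0x%02x, %d bytes of item data" % (item[0], len(item)))
```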

But it doesn't stop with the Registry...shell item ID lists are part of Windows shortcut (LNK) files, which means that they're also part of the Jump Lists found on Windows 7 and 8.

Okay, but so what, right?  Well, the SpiderLabs folks posted a very interesting use of LNK files to gather credentials during a pen test; have any forensic analysts out there seen the use of this technique before?  Perhaps more importantly, have you looked for it?  Would you know how to look for this during an exam?

Here's a really good post that goes into some detail regarding how LNK files can be manipulated with malicious intent, demonstrating how important it is to parse the shell item ID lists.
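And here's a sketch of where that ID list lives inside a shortcut file, following the documented MS-SHLLINK layout (a 0x4C-byte header, LinkFlags at offset 0x14, and bit 0 indicating that a LinkTargetIDList immediately follows the header).  It's illustrative only, not a replacement for a full LNK parser.

```python
# Sketch: pull the LinkTargetIDList out of a Windows shortcut, so the shell
# items can be examined rather than trusting the ASCII path alone.
import struct
import sys

with open(sys.argv[1], "rb") as fh:
    data = fh.read()

header_size = struct.unpack_from("<I", data, 0)[0]
link_flags = struct.unpack_from("<I", data, 0x14)[0]

if header_size == 0x4C and link_flags & 0x01:   # HasLinkTargetIDList
    idlist_size = struct.unpack_from("<H", data, 0x4C)[0]
    idlist = data[0x4E : 0x4E + idlist_size]
    print("LinkTargetIDList present: %d bytes" % idlist_size)
    # hand idlist off to a shell item parser, such as the walk_idlist()
    # sketch shown earlier in this post
else:
    print("no LinkTargetIDList present")
```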

So, the point of the graphic, as well as of the post overall, is this...if you're NOT parsing shellbags as part of your exam, and if you're NOT parsing through shortcut files as part of your root cause analysis (RCA), then you're only seeing three lights.

There are, in fact, four lights.

Resources
DOSDate Time Stamps in Shell Items
ShellBag Analysis, Revisited...Some Testing