Sunday, December 18, 2016


What's New?
The question I hear perhaps most often, when a new book comes out, or if I'm presenting at a conference, is "what's new?" or "what's the latest and greatest in Windows version <insert upcoming release version>?"

In January 2012, after attending a conference, I was at baggage claim at my home airport with a fellow conference attendee; he asked me about a Windows NT4 system he was analyzing.  In November of 2016, I was involved in an engagement where I was analyzing about half a dozen Windows 2003 server systems.  On an online forum just last week, I saw a question regarding tracking users accessing MSOffice 2003 documents on Windows 2003 systems.

The lesson here is, don't throw out or dismiss the "old stuff", because sometimes all you're left with is the old stuff.  Knowing the process for analysis is much more important than memorizing tidbits and interesting facts that you may or may not ever actually use.

Keeping Up
Douglas Brush recently started recording podcast interviews with industry luminaries, beginning with himself (the array is indexed at zero), and then going to Chris Pogue and David Cowen.  I took the opportunity to listen to the interview with Chris not long ago; he and I had worked together for some time at IBM (I was part of the ISS purchase, Chris was brought over from the IBM team).

Something Chris said during the interview was very poignant; it was one of those things that incident responders know to be true, even if it's not something you've ever stated or specifically crystallized in your own thoughts.  Chris mentioned during the interview that when faced with a number of apparently complex options, non-technical folks will often tend toward those options with which they are most familiar.  This is true not just in information security (Chris mentioned firewalls, IDS, etc.), but also during incident response.  As a responder and consultant, I've seen time and again where it takes some time for the IT staff that I'm working with to understand that while, yes, there is malware on this system, it's there because someone specifically put it there so that they could control it (hands on keyboard) and move laterally within the infrastructure.

Chris's interview was fascinating, and I spent some time recently listening to David's interview, as well.  I had some great take-aways from a lot of the things David said.  For example, a good bit of David's work is clearly related to the courts, and he does a great job of dispelling some of what may be seen as myth, as well as addressing a few simple facts that should (I say "should" because it's not always the case) persist across all DFIR work, whether you're headed to court to testify or not.  David also has some great comments on the value of sharing information within the community.

So far, it's been fascinating to listen to the folks being interviewed, but to be honest, there are a lot of women who've done exceptional work in the field, as well, and who should not be overlooked.  Mari DeGrazia, Jamie Levy, Cindy Murphy, and Sarah Edwards, to name just a few.  These ladies, as well as many others, have had a significant impact on, and continue to contribute to, the community.

I did listen to the Silver Bullet Podcast not long ago, specifically the episode where Lesley Carhart was interviewed.  It's good to get a different perspective on the industry, and I'm not just talking about from a female perspective.  Lesley comes from a background that is much different from mine, so I found listening to her interview very enlightening.

Friday, November 18, 2016

The Joy of Open Source

Not long ago, I was involved in an IR engagement where an intruder had exploited a web-based application on a Windows 2003 system, created a local user account, accessed the system via Terminal Services using that account, run tools, and then deleted the account that they'd created before continuing on using accounts and stolen credentials.

The first data I got to look at was the Event Logs from the system; using evtparse, I created a mini-timeline and got a pretty decent look at what had occurred on the system.  The client had enabled process tracking so I could see the Security/592 and ../593 events, but unfortunately, the additional Registry value had not been created, so we weren't getting full command lines in the event records.  From the mini-timeline, I could "see" the intruder creating the account, using it, and then deleting it, all based on the event record source/ID pairs.

For account creation:
Security/624 - user account created
Security/628 - user account password set
Security/632 - member added to global security group
Security/642 - user account changed

For account deletion:
Security/630 - user account deleted
Security/633 - member removed from global security group
Security/637 - member removed from local security group
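These source/ID pairs lend themselves to a simple lookup table; as a minimal sketch (in Python, with the pair-to-description mapping taken directly from the lists above), recognizing the account-creation "artifact cluster" might look like:

```python
# Windows 2003-era Security Event Log source/ID pairs for local account
# lifecycle events, per the lists above.
ACCOUNT_EVENTS = {
    ("Security", 624): "user account created",
    ("Security", 628): "user account password set",
    ("Security", 632): "member added to global security group",
    ("Security", 642): "user account changed",
    ("Security", 630): "user account deleted",
    ("Security", 633): "member removed from global security group",
    ("Security", 637): "member removed from local security group",
}

# The four events that, taken together, indicate an account was created.
CREATION_CLUSTER = {("Security", 624), ("Security", 628),
                    ("Security", 632), ("Security", 642)}

def has_creation_cluster(pairs):
    """True if every event in the account-creation cluster is present."""
    return CREATION_CLUSTER <= set(pairs)
```

Note that the key is the source *and* the ID together; as discussed later in these posts, event IDs by themselves are not unique across sources.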

Once I was able to access an image of the system, a visual review of the file system (via FTK Imager) confirmed that the user profile was not visible within the active file system.  Knowing that the account had been a local account, I extracted the SAM Registry hive, and ran regslack.exe against it...and could clearly see two keys (username and RID, respectively), and two values (the "F" and "V" values) that had been deleted and were currently "residing" within unallocated space in the hive file.  What was interesting was that the values still included their complete binary data.

I was also able to see one of the deleted keys via RegistryExplorer.

SAM hive open in RegistryExplorer

Not that I needed to confirm it, but I also ran a RegRipper plugin against the hive and ended up finding indications of two other deleted keys, in addition to the previously-observed information.

Output of RR plugin (Excerpt)

Not only that, but the plugin retrieves the full value data for the deleted values; as such, I was able to copy (via Notepad++) the code for parsing the "F" and "V" value data out of one plugin and paste it into the other for temporary use, so that the binary data was parsed into something intelligible.

The plugin (output below) made it relatively simple to add the deleted key information to a timeline, so that additional context would be visible.

Output of RR plugin

If nothing else, this really illustrates one of the valuable aspects of open source software.  With relatively little effort and time, I was able to incorporate findings directly into my analysis, adding context and clarity to that analysis.  I've modified Perl and Python scripts to meet my own needs, and this is just another example of being able to make quick and easy changes to the available tools in order to meet immediate analysis needs.

Speaking of which, I've gone back and picked up something of a side project that I'd started a while back, based on a recent suggestion from a good friend. As I've started to dig into it a bit more, I've run into some challenges, particularly when it comes to "seeing" the data, and translating it into something readable.  Where I started with a hex editor, highlighting one DWORD value at a time, I've ended up writing and cobbling together bits of (open source) code to help me with this task. At first glance, it's like having a bunch of spare parts lying out on a workbench, but closer inspection reveals that it's all basically the same stuff, just being used in different ways.  What started a number of years ago with the header files from Petter Nordahl-Hagen's chntpw utility became the first Registry parsing code that I wrote, which I'm still using to this day.

Some take-aways from this experience...

When a new version of Windows comes out, everyone wants to know what the new 'thing' is...what's the latest and greatest artifact?  But what about the stuff that always works?  What about the old stuff that gets used again and again, because it works?

Understanding the artifact cluster associated with certain actions on the various versions of Windows can help in recognizing those actions when you don't have all of the artifacts available.  Using just the event record source/ID pairs, we could see the creation and deletion of the user account, even if we didn't have process information to confirm it for us.  In addition, the account deletion occurred through a GUI tool (mmc.exe running compmgmt.msc) and all the process creation information would show us is that the tool was run, not which buttons were pushed.  Even without the Event Log record metadata, we still had the information we extracted from unallocated space within the SAM hive file.

Having access to open source tools means that things can be tweaked and modified to suit your needs.  Don't program?  No problem.  Know someone who does?  Are you willing to ask for help?  No one person can know everything, and sometimes it's helpful to go to someone and get a fresh point of view.

Saturday, October 29, 2016

Ransomware
I think that we can all agree, whether you've experienced it within your enterprise or not, ransomware is a problem.  It's one of those things that you hope never happens to you, that you hope you never have to deal with, and you give a sigh of relief when you hear that someone else got hit.

The problem with that is that hoping isn't preparing.

Wait...what?  Prepare for a ransomware attack?  How would someone go about doing that?  Well, consider the quote from the movie "Blade":

Once you understand the nature of a thing, you know what it's capable of.

This is true for ransomware, as well as for Deacon Frost.  If you understand what ransomware does (encrypts files), and how it gets into an infrastructure, you can take some simple steps (relative to your infrastructure and culture, of course) to prepare for such an incident.  Interestingly enough, many of these steps are the same ones that you'd use to prepare for any type of incident.

First, some interesting reading and quotes...such as from this article:

The organization paid, and then executives quickly realized a plan needed to be put in place in case this happened again. Most organizations are not prepared for events like this that will only get worse, and what we see is usually a reactive response instead of proactive thinking.


I witnessed a hospital in California be shut down because of ransomware. They paid $2 million in bitcoins to have their network back.

The take-aways are "not prepared" and "$2 million"...because it would very likely have cost much less than $2 million to prepare for such attacks.

The major take-aways from the more general ransomware discussion should be that:

1.  Ransomware encrypts files.  That's it.

2.  Like other malware, those writing and deploying ransomware work to keep their product from being detected.

3.  The business model of ransomware will continue to evolve as methods are changed and new methods are developed, while methods that continue to work will keep being used.

Wait...ransomware has a business model?  You bet it does!  Some ransomware (Locky, etc.) is spread either through malicious email attachments, or links that direct a user's browser to a web site.  Anyone who does process creation monitoring on an infrastructure likely sees this.  In a webcast I gave last spring (as well as in subsequent presentations), I included a slide that illustrated the process tree of a user opening an email attachment, and then choosing to "Enable Content", at which point the ransomware took off.

Other ransomware (Samas, Le Chiffre, CryptoLuck) is deployed through a more directed means, bypassing email altogether.  An intruder infiltrates an infrastructure through a vulnerable perimeter system, RDP, TeamViewer, etc., and deploys the ransomware in a dedicated fashion.  In the case of Samas ransomware, the adversary appears to have spent time elevating privileges and mapping the infrastructure in order to locate systems to which they'd deploy the ransomware.  We've seen this in timelines where the adversary would, on one day, simply blast out the ransomware to a large number of systems (most appeared to be servers).

The Ransomware Economy
There are a couple of other really good posts on the Secureworks blog regarding the Samas ransomware (here, and here).  The second blog post, by Kevin Strickland, talks about the evolution of the Samas ransomware; not long ago, I ran across this tweet that let us know that the evolution Kevin talked about hasn't stopped.  This clearly illustrates that developers are continuing to "provide a better (i.e., less detectable) product", as part of the economy of ransomware.  The business models implemented in the ransomware economy will continue to evolve, simply because there is money to be had.

There is also a ransomware economy on the "blue" (defender) side, albeit one that is markedly different from the "red" (attacker) side.

The blue-side economy does not evolve nearly as fast as the red-side.  How many victims of ransomware have not reported their incident to anyone, or simply wiped the box and moved on?  How many of those with encrypted files have chosen to pay the ransom rather than pay to have the incident investigated?  By the way, that's part of the red-side economy...make it more cost effective to pay the ransom than the cost of an investigation.

As long as the desire to obtain money is stronger than the desire to prevent that from happening, the red-side ransomware economy will continue to outstrip that of the blue-side.

Preparation for a ransomware attack is, in many ways, no different from preparing for any other computer security incident.

The first step is user awareness.  If you see something, say something.  If you get an odd email with an attachment that asks you to "enable content", don't do it!  Instead, raise an alarm, say something.

The second step is to use technical means to protect yourself.  We all know that prevention works for only so long, because adversaries are much more dedicated to bypassing those prevention mechanisms than we are to paying to keep those protection mechanisms up to date.  As such, augmenting those prevention mechanisms with detection can be extremely effective, particularly when it comes to definitively nailing down the initial infection vector (IIV).  Why is this important?  Well, in the last couple of months, we've not only seen the delivery mechanism of familiar ransomware changing, but we've also seen entirely new ransomware variants infecting systems.  If you assume that the ransomware is getting in as an email attachment, then you're going to direct resources to something that isn't going to be at all effective.

Case in point...I recently examined a system infected with Odin Locky, and was told that the ransomware could not have gotten in via email, as a protection application had been purchased specifically for that purpose.  What I found was that the ransomware did, indeed, get on the system via email; however, the user had accessed their AOL email (bypassing the protection mechanism), and downloaded and executed the malicious attachment.

Tools such as Sysmon (or anything else that monitors process creation) can be extremely valuable when it comes to determining the IIV for ransomware.  Many variants will delete themselves after files are encrypted, (attempt to) delete VSCs, etc., and being able to track the process tree back to its origin can be extremely valuable in preventing such things in the future.  Again, it's about dedicating resources where they will be the most effective.  Why invest in email protections when the ransomware is getting on your systems as a result of a watering hole attack, or strategic web compromise?  Or what if it's neither of those?  What if the system had been compromised, a reverse shell (or some other access method, such as TeamViewer) installed, and the system infected through that vector?
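As a rough illustration of tracking a process back to its origin, here's a sketch that walks a chain of hypothetical process-creation records (pid, parent pid, image name), the kind of data one might pull from Sysmon event ID 1 entries; the record shape and sample process names are assumptions for the example, not any tool's actual output format:

```python
def walk_to_origin(records, pid):
    """Walk parent links back from pid, returning the chain of image names,
    starting with the process itself and ending at the oldest known ancestor."""
    by_pid = {r[0]: r for r in records}
    chain = []
    seen = set()                    # guard against PID-reuse loops
    while pid in by_pid and pid not in seen:
        seen.add(pid)
        _, ppid, image = by_pid[pid]
        chain.append(image)
        pid = ppid
    return chain

# Hypothetical records: a user opened a document, enabled content, and the
# macro spawned a shell that launched the ransomware.
procs = [
    (100, 1, "explorer.exe"),
    (200, 100, "winword.exe"),
    (300, 200, "cmd.exe"),
    (400, 300, "ransom.exe"),
]
print(walk_to_origin(procs, 400))
# ['ransom.exe', 'cmd.exe', 'winword.exe', 'explorer.exe']
```

A chain ending at winword.exe points at an email/document IIV; one ending at a reverse shell or TeamViewer process points somewhere else entirely, which is exactly why this matters for resource allocation.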

Ransomware will continue to be an issue, and new means for deploying it are being developed all the time.  The difference between ransomware and, say, a targeted breach is that you know almost immediately when you've had files encrypted.  Further, during targeted breaches, the adversary will most often copy your critical files; with ransomware, the files are made unavailable to anyone.  In fact, if you can't decrypt/recover your files, there's really no difference between ransomware and secure deletion of your files.

We know that on the blue-side, prevention eventually fails.  As such, we need to incorporate detection into our security posture, so that if we can't prevent the infection or recover our files, we can determine the IIV for the ransomware and address that issue.

Addendum, 30 Oct: As a result of an exchange with (and thanks to) David Cowen, I think that I can encapsulate the ransomware business model to the following statement:

The red-side business model for ransomware converts a high number of low-value, blue-side assets into high-value attacker targets, with a corresponding high ROI (for the attacker).

What does this mean?  I've asked a number of folks who are not particularly knowledgeable in infosec if there are any files on their individual systems without which they could simply not do their jobs, or without access to those files, their daily work would significantly suffer.  So far, 100% have said, "yes".  Considering this, it's abundantly clear that attackers have their own reciprocal Pyramid of Pain that they apply to defenders; that is, if you want to succeed (i.e., get paid), you need to impact your target in such a manner that it is more cost-effective (and less painful) to pay the ransom than it is to perform any alternative.  In most cases, the alternative amounts to changing corporate culture.


I was working on an incident recently, and while extracting files from the image, I noticed that there was an AmCache.hve file.  Not knowing what I would find in the file, I extracted it to include in my analysis.  As I began my analysis, I found that the system I was examining was a Windows Server 2012 R2 Standard system.  This was just one system involved in the case, and I already had a couple of indicators.

As part of my analysis, I parsed the AppCompatCache value and found one of my indicators:

SYSVOL\downloads\malware.exe  Wed Oct 19 15:35:23 2016 Z

I was able to find a copy of the malware file in the file system, so I computed the MD5 hash, and pulled the PE compile time and interesting strings out of the file.  The compile time was 9 Jul 2016, 11:19:37 UTC.

I then parsed the AmCache.hve file and searched for the indicator, and found:

File Reference  : 28000017b6a
LastWrite          : Wed Oct 19 06:07:02 2016 Z
Path                   : C:\downloads\malware.exe
SHA-1               : 0000
Last Mod Time2: Wed Aug  3 13:36:53 2016 Z

File Reference   : 3300001e39f
LastWrite           : Wed Oct 19 15:36:07 2016 Z
Path                    : C:\downloads\malware.exe
SHA-1                : 0000
Last Mod Time2: Wed Oct 19 15:35:23 2016 Z

File Reference  : 2d000017b6a
LastWrite          : Wed Oct 19 06:14:30 2016 Z
Path                   : C:\Users\\Desktop\malware.exe
SHA-1               : 0000
Last Mod Time  : Wed Aug  3 13:36:54 2016 Z
Last Mod Time2: Wed Aug  3 13:36:53 2016 Z
Create Time       : Wed Oct 19 06:14:20 2016 Z
Compile Time    : Sat Jul  9 11:19:37 2016 Z

All of the SHA-1 hashes were identical across the three entries.  Do not ask for the hashes...I'm not going to provide them, as this is not the purpose of this post.

What this illustrates is the value of what can be derived from the AmCache.hve file.  Had I not been able to retrieve a copy of the malware file from the file system, I would still have a great deal of information about the file, including (but not limited to) the fact that the same file was on the file system in three different locations.  In addition, I would also have the compile time of the executable file.
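That cross-check, the same hash appearing under multiple paths, is easy to automate once the AmCache entries are parsed.  A trivial sketch, using made-up entries in the shape of the output above (the paths and hash values here are placeholders, not the real case data):

```python
from collections import defaultdict

# Entries as one might collect them from parsed AmCache.hve output;
# these values are illustrative only.
entries = [
    {"path": r"C:\downloads\malware.exe",          "sha1": "aaaa"},
    {"path": r"C:\downloads\malware.exe",          "sha1": "aaaa"},
    {"path": r"C:\Users\user\Desktop\malware.exe", "sha1": "aaaa"},
    {"path": r"C:\Windows\notepad.exe",            "sha1": "bbbb"},
]

def hashes_in_multiple_paths(entries):
    """Return {sha1: set of paths} for hashes seen under more than one path."""
    by_hash = defaultdict(set)
    for e in entries:
        by_hash[e["sha1"]].add(e["path"])
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

Even without the file itself, this kind of pivot on the AmCache SHA-1 shows you everywhere the same binary landed.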

Sunday, October 16, 2016

Links and Updates

RegRipper Plugin
Not long ago, I read this blog post by Adapt Forward Cyber Security regarding an interesting persistence mechanism, and within 10 minutes, had a RegRipper plugin written and tested against some existing data that I had available.

So why would I say this?  What's the point?  The point is that with something as simple as copy-paste, I extended the capabilities of the tool, and now have new functionality that will let me flag something that may be of interest, without having to memorize a checklist.  And as I pushed the new plugin out to the repository, everyone who downloads and uses the plugin now has that same capability, without having to have spent the time that the folks at Adapt Forward spent on this; through documentation and sharing, the DFIR community is able to extend the functionality of existing toolsets, as well as the reach of knowledge and experience.

Speaking of which, I was recently assisting with a case, and found some interesting artifacts in the Registry regarding LogMeIn logons; they didn't include the login source (there was more detail recovered from a Windows Event Log record), but they did include the user name and date/time.  This was the result of creating a timeline that included Registry key LastWrite times, and led to investigating an unusual key/entry in the timeline.  I created a RegRipper plugin to extract the information, and then created another to include the artifact in a timeline.  Shortly after creating them both, I pushed them up to Github.

Extending Tools, Extending Capabilities
Not long ago, I posted about parsing .pub files that were used to deliver malicious macros.  There didn't seem to be a great deal of interest from the community, but hey, what're you gonna do, right?  One comment that I did receive was, "yeah, so it's a limited infection vector."  You know what?  You're right, it is.  But the point of the post wasn't, "hey, look, here's a new thing..."; it was "hey, look, here's an old thing that's back, and here's how, if you understand the details of the file structure, you can use that information to extend your threat intel, and possibly even your understanding of the actors using it."

And, oh, by the way, if you think that OLE is an old format, you're right...but if you think that it's not used any longer, you're way not right.  The OLE file format is used with Sticky Notes, as well as automatic Jump Lists.
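One quick way to confirm that a file (Sticky Notes, an automatic Jump List, a pre-2007 Office document) really is OLE-format is its fixed eight-byte signature, defined in Microsoft's compound file binary (MS-CFB) specification:

```python
# The OLE2/Compound File Binary signature, per the MS-CFB specification.
OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

def is_ole_file(data: bytes) -> bool:
    """True if the byte string begins with the OLE compound-file signature."""
    return data[:8] == OLE_MAGIC
```

The same check applies whether the file extension says .doc, .snt, or .automaticDestinations-ms; the format under the hood is the same.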

Live Imaging
Mari had an excellent post recently in which she addressed live imaging of Mac systems.  As she pointed out in her post, there are times when live imaging is not only a good option, but the only option.

The same can also be true for Windows systems, and not just when encryption is involved.  There are times when the only way to get an image of a server is to do so using a live imaging process.

Something that needs to be taken into consideration during the live imaging of Windows systems is the state of various files and artifacts while the system is live and running.  For example, Windows Event Logs may be "open", and it's well known that the AppCompatCache data is written at system shutdown.

Not long ago, I commented regarding my experiences using the AmCache.hve file during investigations; in short, I had not had the same sort of experiences as those described by Eric Z.

That's changed.

Not long ago, I was examining some data from a point-of-sale breach investigation, and had noticed in the data that there were references to a number of tools that the adversary had used that were no longer available on the system.  I'd also found that the installed AV product wasn't writing detection events to the Application Event Log (as many such applications tend to do...), so I ran 'strings' across the quarantine index files, and was able to get the original path to the quarantined files, as well as what the AV product had alerted on.  In one instance, I found that a file had been identified by the AV product as "W32.Bundle.Toolbar"...okay, not terribly descriptive.

I parsed the AmCache.hve file (the system I was examining was a Windows 7 SP1 system), and searched the output for several of the file names I had from other sources (ShimCache, UserAssist, etc.), and lo and behold, I found a reference to the file mentioned above.  Okay, the AmCache entry had the same path, so I pushed the SHA-1 hash for the file up to VT, and the response identified the file as CCleaner.  This fit into the context of the examination, as we'd observed the adversary "cleaning up", using either native tools (living off the land), or using tools they'd brought with them.

Windows Event Log Analysis
Something I see over and over again (on Twitter, mostly, but also in other venues) is analysts referring to Windows Event Log records solely by their event ID, and not including the source.

Event IDs are not unique.  There are a number of event IDs out there that have different sources, and as such, have a completely different context with respect to your investigation.  Searching Google, it's easy to see (for example) that events with ID 4000 have multiple sources: DNS, SMTPSvc, Diagnostics-Networking, etc.  And that doesn't include non-MS applications...that's just what I found in a couple of seconds of searching.  So, searching across all event logs (or even just one event log file) for all events with a specific ID could result in data that has no relevance to the investigation, or even obscure the context of the investigation.  So what?  Who cares?  Well, something that I've found really helps me out with an examination is to use eventmap.txt to "tag" events of interest ("interest", as in, "found to be interesting from previous exams") while creating a timeline.  One of the first things I'll do after opening the TLN file is to search for "[maldetect]" and "[alert]", and get a sense of what I'm working with (i.e., develop a bit of situational awareness).  This works out really well because I use the event source and ID in combination to identify records of interest.
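A simplified sketch of that tagging approach, keying on the (source, ID) tuple rather than the ID alone; the map entries below are illustrative stand-ins for what eventmap.txt carries, not its actual contents or format:

```python
# Illustrative (source, event ID) -> tag map.  Keying on the pair avoids
# false hits from unrelated sources that reuse the same event ID.
EVENTMAP = {
    ("Microsoft-Windows-Security-Auditing", 4625): "[alert]",
    ("Microsoft-Windows-Windows Defender", 1116): "[maldetect]",
}

def tag_line(source, event_id, description):
    """Prefix the event description with its tag, if the pair is mapped."""
    tag = EVENTMAP.get((source, event_id))
    return f"{tag} {description}" if tag else description
```

An event with ID 4625 from some other source would pass through untagged, which is exactly the point: the source/ID pair carries the context, the ID alone does not.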

As many of us still run across Windows XP and 2003 systems, this link provides a good explanation (and a graphic) of how wrapping of event records works in the Event Logs on those systems.

Thursday, September 22, 2016

Size Matters

Yes, it does, and sometimes smaller is better.

Here's why...the other day I was "doing" some analysis, trying to develop some situational awareness from an image of a Windows 2008 SP2 system.  To do so, I extracted data from the listing of the partition via FTK Imager, Windows Event Logs, and Registry hive files.  I then used this data to create a micro-timeline (one based on limited data) so that I could just get a general "lay of the land", if you will.

One of the things I did was open the timeline in Notepad++, run the slider bar to the bottom of the file, and search (going "up" in the file) for "Security-Auditing/".  I did this to see where the oldest event from the Security Event Log would be located.  Again, I was doing this for situational awareness.

Just to keep track, from the point where I had the extracted data sources, I was now just under 15 min into my analysis.

The next thing I did was go all the way back to the top of the file, and I started searching for the tags included in eventmap.txt.  I started with "[maldetect]", and immediately found clusters of malware detections via the installed AV product.

Still under 18 min at this point.

Then I noticed something interesting...there was a section of the timeline that had just a bunch of failed login attempts (Microsoft-Windows-Security-Auditing/4625 events), all of them type 10 logins.  I knew that one of the things about this case was unauthorized logins via Terminal Services, and seeing the failed login attempts helped me narrow down some aspects of that; specifically, the failed login attempts originated from a limited number of IP addresses, but there were multiple attempts, many using user names that didn't exist on the system...someone was scanning and attempting to brute force a login.

I already knew from the pre-engagement conference calls that there were two user accounts that were of primary was a legit account the adversary had taken over, the other was one the adversary had reportedly created.  I searched for one of those and started to see "Microsoft-Windows-Security-Auditing/4778" (session reconnect) and /4779 (session disconnect) events.  I had my events file, so I typed the commands:

type events.txt | find "Microsoft-Windows-Security-Auditing/4778" > sec_events.txt
type events.txt | find "Microsoft-Windows-Security-Auditing/4779" >> sec_events.txt

From there, I wrote a quick script that ran through the sec_events.txt file and gave me a count of how many times various system names and IP addresses appeared together.  From the output of the script, I could see that for some system names, ones that were unique (i.e., "Hustler", etc., but NOT "Dell-PC") were all connecting from the same range of IP addresses.
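That quick script can be approximated in a few lines of Python.  The line format below is an assumption for illustration (workstation name and source IP taken as the last two semicolon-separated fields of the event description), not the actual format of my events file:

```python
from collections import Counter

def count_pairs(lines):
    """Count (workstation, source IP) pairs from 4778/4779 event lines,
    assuming each line ends with ';workstation;ip' fields."""
    c = Counter()
    for line in lines:
        fields = line.strip().split(";")
        if len(fields) >= 3:
            c[(fields[-2], fields[-1])] += 1
    return c

# Synthetic sample lines (names/IPs are illustrative, per the post).
sample = [
    "1476890000|EVT|SERVER|admin|Security-Auditing/4778;Hustler;",
    "1476890060|EVT|SERVER|admin|Security-Auditing/4779;Hustler;",
    "1476890120|EVT|SERVER|admin|Security-Auditing/4778;Dell-PC;",
]
counts = count_pairs(sample)
```

Sorting `counts.most_common()` gives exactly the kind of at-a-glance view described above: which workstation names cluster with which source IP ranges.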

From the time that I had the data available, to the point where I was looking at the output of the script, was just under 45 min.  Some of that time included noodling over how best to present what I was looking for, so that I didn't have to go through things manually...make the code do alphabetical sorting rather than having to do it myself, that sort of thing.

The point of all this is that sometimes, you don't need a full system timeline, using all of the available data, in order to make headway in your analysis.  Sometimes a micro-timeline is much better, as it doesn't include all the "noise" associated with a bunch of unrelated activity.  And there are times when a nano-timeline is a vastly superior resource.

As a side note, after all of this was done, I extracted the NTUSER.DAT files for the two user profiles of interest from the image, added the UserAssist information from each of them to the main events file, and recreated the original timeline with the new data...the time to do that was less than 10 min, and I was being lazy.  That one small action really crystallized the picture of activity on the system.

Addendum, 27 Sept:
Here's another useful command line that I used to get logon data:

type events.txt | find "Security-Auditing/4624" | find "admin123" | find ",10"

Monday, September 19, 2016


Malicious Office Documents
Okay, my last post really doesn't seem to have sparked too much interest; it went over like a sack of hammers.  Too bad.  Personally, I thought it was pretty fascinating, and can see the potential for additional work further on down the road.  I engaged in the work to help develop a clearer threat intel picture, and there has simply been no interest.  Oh, well.

Not long ago, I found this pretty comprehensive post regarding malicious Office documents, and it covers both pre- and post-2007 formats.

What's in your WPAD?
At one point in my career, I was a security admin in an FTE position within a company.  One of the things I was doing was mapping the infrastructure and determining ingress/egress points, and I ran across a system that actually had persistent routes enabled via the Registry.  As such, I've always tried to be cognizant of anything that would redirect a system to a route or location other than what was intended.  For example, when responding to an apparent drive-by downloader attack, I'd be sure to examine not only the web history but also the user Favorites or Bookmarks; there have been several times where doing this sort of analysis has added a slightly different shade to the investigation.

Other examples of this include things like modifications to the hosts file.  Windows uses the hosts file for name resolution, and I've used this in conjunction with a Scheduled Task, as a sort of "parental control", leaving the WiFi up after 10pm for a middle schooler, but redirecting some sites to localhost.  Using that knowledge over the years, I've also examined the hosts file for indicators of untoward activity; I even had a plugin for the Forensic Scanner that would automatically extract any entries in the hosts file that were other than the default.  Pretty slick.

Not new...this one is over four years old...but I ran across this post on the NetSec blog, and thought that it was worth mentioning.  Sometimes, you just have to know what you're looking for when performing incident response, and sometimes what you're looking for isn't in memory, or in a packet capture.

Speaking of checking things related to the browser, I saw something that @DanielleEveIR tweeted recently, specifically:

I thought this was pretty interesting, not something I'd seen or thought of before.  Unfortunately, many of the malware RE folks I know are focused more on the network than the host, so things such as modifications of Registry values tend to fall through the cracks.  However, if you're running Carbon Black, this might make a pretty good watchlist item, eh?

I did a search and found a malware sample described here that exhibits this behavior, and I found another description here.  Hopefully, that might provide some sort of idea as to how pervasive this artifact is.

@DanielleEveIR had another interesting tweet, stating that if your app makes a copy of itself and then launches the copy, it might be malware.  Okay, she's starting to sound like the Jeff Foxworthy of IR..."you might be malware if..."...but she has a very good point.  Our team recently saw the LaZagne credential theft tool being run in an infrastructure, and if you've ever seen or tested this, that's exactly what it does.  This would make a good watchlist item, as well...regardless of what the application name is, if the process name is the same as the parent process name, flag that puppy!  You can also include this in any script that you use that parses Security Event Logs (for event ID 4688) or Sysmon Event Logs.
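As a sketch of what that flagging might look like in such a script (the field names here are hypothetical; map them to however your 4688 or Sysmon parser labels the two image paths):

```python
# Sketch: flag any process whose image name matches its parent's, as
# LaZagne does when it copies and relaunches itself.  The record
# field names (new_process, parent_process) are made up for this
# example; adjust them to your own parser's output.
import ntpath

def self_spawned(records):
    flagged = []
    for r in records:
        child = ntpath.basename(r["new_process"]).lower()
        parent = ntpath.basename(r["parent_process"]).lower()
        if child == parent:
            flagged.append(r)
    return flagged

# Hypothetical parsed 4688-style records
events = [
    {"new_process": r"C:\Users\bob\temp\lazagne.exe",
     "parent_process": r"C:\Users\bob\AppData\lazagne.exe"},
    {"new_process": r"C:\Windows\System32\cmd.exe",
     "parent_process": r"C:\Windows\explorer.exe"},
]
print(self_spawned(events))
```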

Defender Bias
There've been a number of blog posts that have discussed analyst bias when it comes to DFIR, threat intel, and attribution.

Something that I haven't seen discussed much is blue team or defender bias.  Wait...what?  What is "defender bias"?  Let's look at an're sitting in a meeting, discussing an incident that your team is investigating, and you're fully aware that you don't have all the data at this point.  You're looking at a few indicators, maybe some files and Windows Event Log records, and then someone says, "...if I were the bad guy, I'd...".  Ever have that happen?  Having been an incident responder for about 17 years, I've heard that phrase spoken.  A lot.  Sometimes by members of my team, sometimes by members of the client's team.

Another place that defender bias can be seen is when discussing "crown jewels".  One of the recommended exercises while developing a CSIRP is to determine where the critical data for the organization is located within the infrastructure, and then develop response plans around that data. The idea of this exercise is to accept that breaches are inevitable, and collapse the perimeter around the critical data that the organization relies on to function.

But what happens when you don't have the instrumentation and visibility to determine what the bad guy is actually doing?  You'll likely focus on protecting that critical data while the bad guy is siphoning off what they came for.

The point is that what may be critical to you, to your business, may not be the "crown jewels" from the perspective of the adversary.  Going back as far as we can remember, reports from various consulting organizations have referred to the adversary as having a "shopping list", and while your organization may be on that list, the real question isn't just, "..where are your critical assets?", it's also "...what is the adversary actually doing?"

What if your "crown jewels" aren't what the adversary is after, and your infrastructure is a conduit to someone else's infrastructure?  What if your "crown jewels" are the latest and greatest tech that your company has on the drawing boards, and the adversary is instead after the older gen stuff, the tech shown to work and with a documented history and track record of reliability?   Or, what if your "crown jewels" are legal positions for clients, and the adversary is after your escrow account?

My point is that there is going to be a certain amount of defender bias in play, but it's critical for organizations to have situational awareness, and to also realize when there are gaps in that situational awareness.  What you decide are the "crown jewels", in complete isolation from any other input, may not be what the adversary is after.  You'll find yourself hunkered down in your Maginot Line bunkers, awaiting that final assault, only to be mystified when it never seems to come.

Sunday, September 11, 2016


Okay, if you've never seen The Replacements then the title of this post won't be nearly as funny to you as it is to me...but that's okay.

I recently posted an update blog that included a brief discussion of a tool I was working on, and why.  In short, and due in part to a recently publicized change in tactics, I wanted to dust off some old code I'd written and see what information or intel I could collect.

The tactic I'm referring to involves the use of malware delivered via '.pub' files.  I wasn't entirely too interested in this tactic until I found out that .pub (MS Publisher) files are OLE format files.

The code I'm referring to is, something I wrote a while back (according to the header information, the code is just about 10 yrs old!); it was written specifically to parse documents created using older versions of MS Word, specifically those that used OLE.

The Object Linking and Embedding (OLE) file format is pretty well documented at the MS site, so I won't spend a lot of time discussing the details here.  However, I will say that MS has referred to the file format as a "file system within a file", and that's exactly what it is.  If you look at the format, there's actually a 'sector allocation table', and it's laid out very similar to the FAT file system.  Also, at some levels of the 'file system' structure, there are time stamps, as well.  Now, the exact details of when and how these time stamps are created and/or modified (or if they are, at all) isn't exactly clear, but they can serve as an indicator, and something that we can incorporate with other artifacts such that when combining them with context, we can get a better idea of their validity and value.
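To illustrate the "file system within a file" idea, here's a minimal sketch that reads just the compound file header: a signature, a sector size, and counts of FAT sectors, much like a FAT volume's boot sector.  The header built at the bottom is synthetic, purely for demonstration:

```python
# Sketch: parse the OLE/CFB header to show the FAT-like layout the
# MS spec describes -- signature, sector shift (sector size), mini
# sector shift, and the FAT sector count.
import struct

OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

def parse_ole_header(data):
    if data[:8] != OLE_MAGIC:
        raise ValueError("not an OLE compound file")
    sector_shift, mini_shift = struct.unpack_from("<HH", data, 30)
    (num_fat_sects,) = struct.unpack_from("<I", data, 44)
    return {
        "sector_size": 1 << sector_shift,       # typically 512
        "mini_sector_size": 1 << mini_shift,    # typically 64
        "fat_sectors": num_fat_sects,
    }

# Minimal synthetic header for demonstration (not a real document)
hdr = bytearray(512)
hdr[:8] = OLE_MAGIC
struct.pack_into("<HH", hdr, 30, 9, 6)   # 512-byte sectors, 64-byte mini
struct.pack_into("<I", hdr, 44, 1)
print(parse_ole_header(bytes(hdr)))
```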

For most of us who have been in the IR business for a while, when we hear "OLE", we think of the Blair document, and in particular, the file format used for pre-2007 versions of MS Office documents.  Further, many of us thought that with the release of Office 2007, the file format was going to disappear, and at most, we'd maybe have to dust off some tools or analysis techniques at some point in the future.  Wow, talk about a surprise!  Not only did the file format not disappear, as of Windows 7, we started to see it being used in more and more of the artifacts we were seeing on the system.  Take a look at the OLE Compound File page on the ForensicWiki for a list of the files on Windows systems that utilize the OLE file format (i.e., StickyNotes, auto JumpLists, etc.).  So, rather than "going away", the file format has become more pervasive over time.  This is pretty fascinating, particularly if you have a detailed understanding of the file structure format.  In most cases when you're looking at these files on a Windows system, the contents of the files will be what you're most interested in; for example, with automatic Jump Lists, we may be most interested in the DestList stream.  However, when an OLE compound file is created off of the system, perhaps through the use of an application, we (as analysts) would be very interested in learning all we can about the file itself.

So, the idea behind the tool I was working on was to pull apart one component of the overall attack to see if there were any correlations to the same component with respect other attacks.  I'm not going to suggest it's the same thing (because it's not) but the idea I was working from is similar to pulling a device apart and breaking down its components in order to identify the builder, or at the very least to learn a little bit more that could be applied to an overall threat intel picture.

Here's what we're looking this case, the .pub files are arriving as email attachments, so you have a sender email address, contents of the email header and body, attachment name, etc.  All of this helps us build a picture of the threat.  Is the content of the email body pretty generic, or is it specifically written to elicit the desired response (opening the attachment) from the user to whom it was sent?  Is it targeted?  Is it spam or spear-phishing/whaling?

Then we have what occurs after the user opens the attachment; in some cases, we see that files are downloaded and native commands (e.g., bitsadmin.exe) are executed on the system.  Some folks have already been researching those areas or aspects of the overall attacks, and started pulling together things such as sites and files accessed by bitsadmin.exe, etc.

Knowing a bit about the file format of the attachment, I thought I'd take an approach similar to what Kevin talked about in his Continuing Evolution of Samas Ransomware blog post.  In particular, why not see if I could develop some information that could be mapped to other aspects of the attacks?  Folks were already using Didier's to extract information about the .pub files, as well as extract the embedded macros, but I wanted to take a bit of a closer look at the file structure itself.  As such, I collected a number of .pub files that were known to be malicious in nature and contain embedded macros (using open sources), and began to run the tool I'd written ( across the various files, looking not only for commonalities, but differences, as well.  Here are some of the things I found:

All of the files had different time stamps; within each file, all of the "directory" streams had the same time stamp.  For example, from one file:

Root Entry  Date: 30.06.2016, 22:03:16

All of the "directory" streams below the Root Entry had the same time stamp, as illustrated in the following image (different file from the one with 30 June time stamps):
.pub file structure listing

Some of the files had a populated "Authress:" entry in the SummaryInformation section.  However, with the exception of those files, the SummaryInformation and DocumentSummaryInformation streams were blank.

All of the files had Trash sections (again, see the document structure specification) that were blank.
Trash Sections Listed
For example, in the image to the left, we see the tool listing the Trash sections and their sizes; for each file examined, the File Space section was all zeros, and the System Space section was all "0xFFFF".  Without knowing more about how these sections are managed, it's difficult to determine specifically if this is a result of the file being created by whichever application was used (sort of a 'default' configuration), or if this is the result of an intentional action.

Many (albeit not all) files contained a second stream with an embedded macro.  In all cases within the sample set, the stream was named "Module1", and contained an empty function.  However, in each case, that empty function had a different name.

Some of the streams of all of the files were identical across the sample set.  For example, the \Quill\QuillSub\ \x01CompObj stream for all of the files appears as you see in the image below.
\Quill\QuillSub\ \x01CompObj stream

All in all, for me, this was some pretty fascinating work.  I'm sure that there may be even more information to collect with a larger sample set.  In addition, there's more research to be done...for example, how do these files compare to legitimate, non-malicious Publisher files?  What tools can be used to create these files?

Wednesday, September 07, 2016

More Updates

Mari had a great post recently that touched on the topic of timelines, which also happens to be the topic of her presentation at the recent HTCIA conference (which, by all Twitter accounts, was very well received).

A little treasure that Mari added to the blog post was how she went about modifying a Volatility plugin in order to create a new one.  Mari says in the post, "...nothing earth shattering...", but you know what, sometimes the best and most valuable things aren't earth shattering at all.  In just a few minutes, Mari created a new plugin, and it also happens to be her first Volatility plugin.  She shared her process, and you can see the code right there in the blog post.

Speaking of sharing..well, this has to do with DFIR in general, but not Windows specifically...I ran across this fascinating blog post recently.  In short, the author developed a means (using Python) for turning listings of cell tower locations (pulled from phones by Cellebrite) into a Google Map.

A while back, I'd written and shared a Perl script that did something similar, except with WiFi access points.

The point is that someone had a need and developed a tool and/or process for (semi-)automatically parsing and processing the original raw data into a final, useful output format.
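The heavy lifting in that sort of tool is usually just the output format.  As a sketch of the general idea (the coordinate values here are made up), turning a list of label/lat/lon records into a minimal KML file that Google Earth or Maps can plot looks something like this:

```python
# Sketch: turn (label, lat, lon) records -- cell towers, WiFi access
# points, whatever the source -- into a minimal KML document.
def to_kml(points):
    placemarks = "".join(
        "<Placemark><name>{0}</name><Point>"
        "<coordinates>{2},{1}</coordinates>"  # KML wants lon,lat
        "</Point></Placemark>".format(name, lat, lon)
        for name, lat, lon in points
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns=""><Document>'
            + placemarks + "</Document></kml>")

# Hypothetical tower locations for demonstration
towers = [("tower-1", 38.8895, -77.0353), ("tower-2", 38.8977, -77.0365)]
print(to_kml(towers))
```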

.pub files
I ran across this ISC Handler Diary recently...pretty interesting stuff.  Has anyone seen or looked at this from a process creation perspective?  The .pub files are OLE compound "structured storage" files, so has anyone captured information from endpoints (IRL, or via a VM) that illustrates what happens when these files are launched?

For detection of these files within an acquired image, there are some really good Python tools available that are good for generally parsing OLE files.  For example, there's Didier's, as well as decalage/oletools.  I really like, and have tried using it for various testing purposes in the past, usually using files either from actual cases (after the fact), or test documents downloaded from public sources.

A while back I wrote some code (i.e., specifically to parse OLE structured storage files, so I modified that code to essentially recurse through the OLE file structure, and when getting to a stream, simply dump the stream to STDOUT in a hex-dump format.  However, as I'm digging through the API, there's some interesting information available embedded within the file structure itself.  So, while I'm using Didier's as a comparison for testing, I'm not entirely interested in replicating the great work that he's done already, as much as I'm looking for new things to pull out, and new ways to use the information, such as pulling out date information (for possible inclusion in a timeline, or inclusion in threat intelligence), etc.

So I downloaded a sample found on VirusTotal, and renamed the local copy to be more inline with the name of the file that was submitted to VT.

Here's the output of, when run across the downloaded file: output

Now, here's the output from, the current iteration of the OLE parsing tool that I'm working on, when run against the same file: output

As you can see, there are more than a few differences in the outputs, but that doesn't mean that there's anything wrong with either tool.  In fact, it's quite the opposite. uses a different technique for tracking the various streams in the file; uses the designators from within the file itself.

The output of has 5 columns:
- the stream designator (from within the file itself)
- a tuple that tells me:
   - is the stream a "file" (F) or a "directory" (D)
   - if the stream contains a macro
   - the "type" (from here); basically, is the property a PropertySet?
- the date (OLE VT_DATE format is explained here)
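Decoding that last column is straightforward; a VT_DATE is a float counting days since 30 Dec 1899, with the fraction carrying the time of day.  A minimal sketch (positive values only; negative OLE dates have quirkier semantics):

```python
# Sketch: decode an OLE VT_DATE value -- days since 30 Dec 1899,
# fractional part = time of day.
from datetime import datetime, timedelta

OLE_EPOCH = datetime(1899, 12, 30)

def vt_date(value):
    return OLE_EPOCH + timedelta(days=value)

print(vt_date(0.0))     # 1899-12-30 00:00:00
print(vt_date(2.5))     # 1900-01-01 12:00:00
```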

Part of the reason I wrote this script was to see which sections within the OLE file structure had dates associated with them, as perhaps that information can be used as part of building the threat intel picture of an incident.  The script has embedded code to display the contents of each of the streams in a hex-dump format; I've disabled the code as I'm considering adding options for selecting specific streams to dump.

Both tools use the same technique for determining if macros exist in a stream (something I found at the VBA_Tools site).

One thing I haven't done is added code to look at the "trash" (described here) within the file format.  I'm not entirely sure how useful something like this would be, but hey, it may be something worth looking at.  There are still more capabilities I'm planning to add to this tool, because what I'm looking at is digging into the structure of the file format itself in order to see if I can develop indicators, which can then be clustered with other indicators.  For example, the use of .pub file attachments has been seen by others (ex: MyOnlineSecurity) being delivered via specific emails.  At this point, we have things such as the sender address, email content, name of the attachment, etc.  Still others (ex: MoradLabs) have shared the results of dynamic analysis; in this case, the embedded macro launching bitsadmin.exe with specific parameters.  Including attachment "tooling" may help provide additional insight into the use of this tactic by the adversary.

Something else I haven't implemented (yet) is extracting and displaying the macros.  According to this site, the macros are compressed, and I have yet to find anything that will let me easily extract and decompress a macro from the stream in which it's embedded, using Perl.  Didier has done it in Python, so perhaps that's something I'll leave to his tool.

Threat Intel
I read this rather fascinating Cisco Continuum article recently, and I have to say, I'm still trying to digest it.  Part of the reason for this is that the author says some things I agree with, but I need to go back and make sure I understand what they're saying, as I might be agreeing while at the same time misunderstanding what's being said.

A big take-away from the article was:

Where a team like Cisco’s Talos and products like AMP or SourceFire really has the advantage, Reid said, is in automating a lot of these processes for customers and applying them to security products that customers are already using. That’s where the future of threat intelligence, cybersecurity products and strategies are headed.

Regardless of the team or products, where we seem to be now is that processes are being automated and applied to devices and systems that clients are already using, in many cases because the company sold the client the devices, as part of a service.

Saturday, September 03, 2016


Registry Settings
Microsoft TechNet had a blog post recently regarding malware setting the IE proxy settings by modifying the "AutoConfigURL" value.  That value sounded pretty familiar, so I did a quick search of my local repository of plugins:

C:\Perl\rr\plugins>findstr /C:"autoconfigurl" /i *.pl

I came up with one plugin,, that contained the value name, and specifically contained this line:  20130328 - added "AutoConfigURL" value info

Pretty fascinating to have an entry added to a plugin 3 1/2 years ago still be of value today.

Speaking of the Registry, I saw a tweet recently indicating that the folks at Volatility had updated the svcscan module to retrieve the service setting for what happens when the services fails to start.  This is pretty interesting...I haven't seen this value having been set or used during an engagement.  However, I am curious...what Windows Event Log record is generated when a service fails to start?  Is there more than one?  According to Microsoft, there are a number of event IDs related to service failures of various types.  The question is, is there one (or two) in particular that I should look for, with respect to this Registry value? For example, if I were creating a timeline of system activity and included the contents of the System Event Log, which event IDs (likely from source "Service Control Manager") would I be most interested in, in this case?
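Once you've settled on the IDs you care about, pulling them out of a parsed System Event Log timeline is trivial.  Here's a sketch; the IDs below are the usual "Service Control Manager" failure/crash records (7000/7001 = failed to start, 7009 = start timeout, 7023/7024 = terminated with an error, 7031/7034 = unexpected termination), but verify the set against your own test data before relying on it:

```python
# Sketch: filter Service Control Manager failure events from a parsed
# System Event Log.  The record structure here is hypothetical; map
# the keys to your own timeline format.
SCM_FAILURE_IDS = {7000, 7001, 7009, 7023, 7024, 7031, 7034}

def scm_failures(events):
    return [e for e in events
            if e["source"] == "Service Control Manager"
            and e["event_id"] in SCM_FAILURE_IDS]

# Hypothetical timeline records for demonstration
timeline = [
    {"source": "Service Control Manager", "event_id": 7000,
     "msg": "The FooSvc service failed to start"},
    {"source": "Service Control Manager", "event_id": 7036,
     "msg": "The BarSvc service entered the running state"},
]
print(scm_failures(timeline))
```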

Outlook Rulez
Jamie (@gleeda) recently RT'd an interesting tweet from MWR Labs regarding a tool that they'd created to set malicious Outlook rules for persistence.

I was going to say, "hey, this is pretty fascinating...", but I can't.  Here's why...SilentBreak Security talked about malicious Outlook rules, as have others.  In fact, if you read the MWRLabs blog post, you'll see that this is something that's been talked about and demonstrated going back as far as 2008...and that's just the publicly available stuff.

Imagine compromising an infrastructure and leaving something in place that you could control via a text message or email.  Well, many admins would look at this and think, "well, yeah, but you need this level of access, and you then have to have this..."...well, now it's just in a single .exe file, and can be achieved with command line access.

I know, this isn't the only way to do this...and apparently, it's not the only tool available to do it, either.  However, it does represent a means for retaining access to a high-value infrastructure, even following eviction/eradication.  Imagine working very hard (and very long hours) to scope, contain, and clean up after a breach, only to have the adversary return, seemingly at will.  And to make matters even more difficult, the command used could easily include a sleep() function, so it's even harder to tie back the return to the infrastructure to a specific event without knowing that the rule exists.

Doing threat hunting and threat response, I see web shells being used now and again.  I've seen web shells as part of a legacy breach that predated the one we were investigating, and I've seen infrastructures rife with web shells.  I've seen a web shell used internally within an infrastructure to move laterally (yeah, no idea about that one, as to "why"...), and I've seen where an employee would regularly find and remove the web shell but not do an RCA and remediate the method used to get the web shell on the system; as long as the vulnerability persists, so does the adversary.

I caught this pretty fascinating description of a new variant of the China Chopper web shell, called "CKnife", recently.  I have to wonder, has anyone else seen this, and if so, do you know what it "looks like" if you're monitoring process creation events (via Carbon Black, Sysmon, etc.) on the system?

Tool Testing
Eric Zimmerman recently shared the results of some pretty extensive tool testing via his blog, and appeared on the Forensic Lunch with David and Matt, to discuss his testing.  I applaud and greatly appreciate Eric's efforts in putting this was clearly a great deal of work to set the tests up, run them, and then document them.

I am, however, one of those folks who won't find a great deal of value in all of that work.  I can't say that I've done, or needed to do, a keyword search in a long time.  When I have had to perform a keyword search, or file carving, I tend to leave long-running tasks such as these for after-hours work, so whether it finishes in 6 1/2 hrs or 8 hrs really isn't too big a deal for me.

A great deal of the work that I do involves creating timelines of all sizes (Mari recently gave a talk on creating mini-timelines at HTCIA).  I've created timelines from just the contents of the Security Event Log, to using metadata from multiple sources (file system, Windows Event Logs, Registry, etc.) from within an image.  If I need to illustrate how and when a user account was used to log into a system, then download and view image files, I don't need a full commercial suite to do this...and a keyword search isn't going to give me any thing of value.

That's not to say that I haven't looked for specific strings.  However, when I do, I tend to take a targeted approach, extracting the data source (pagefile, unallocated space, etc.) from the image, running strings, and then searching for specific items (strings, substrings, etc.) in the output.
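That targeted approach is simple enough to script.  Here's a minimal sketch of the 'strings'-plus-search pass over a raw data source; a real run would stream the file in chunks rather than reading it whole, and the data blob here is made up:

```python
# Sketch: extract printable ASCII runs from raw data (pagefile,
# unallocated space, etc.) and keep only those matching keywords.
import re

def search_strings(data, keywords, min_len=4):
    ascii_strings = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    hits = []
    for s in ascii_strings:
        text = s.decode("ascii")
        if any(k.lower() in text.lower() for k in keywords):
            hits.append(text)
    return hits

# Hypothetical raw data for demonstration
blob = b"\x00\x01" + b"ftp://evil.example/drop.txt" + b"\xff\x00junk\x00"
print(search_strings(blob, ["ftp://", "http://"]))
```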

Again...and don't misunderstand...I think that what Eric did was some really great work.  He would not have done it if it wasn't of significant value to him, and I have no doubt in my mind that he shared it because it will have significant value to others.  

Sunday, August 28, 2016

Links and Updates

Corporate Blogs
Two cool things about my day job is that I see cool things, and get to share some of what is seen through the SecureWorks corporate blog.  Most of my day job can be described as DFIR and threat hunting, and all of the stuff that goes into doing those things.  We see some pretty fascinating things and it's really awesome that we get to share them.

Some really good examples of stuff that our team has seen can be found here, thanks to Phil. Now and again, we see stuff and someone will write up a corporate blog post to share what we saw.  For example, here's an instance where we saw an adversary create and attempt to access a new virtual machine.  Fortunately, the new VM was created on a system that was itself a virtual the new VM couldn't be launched.

In another example, we saw an adversary launch an encoded and compressed PowerShell script via a web shell, in order to collect SQL system identifiers and credentials.  The adversary had limited privileges and access via the web shell (it wasn't running with System level privileges), but may have been able to use native tools to run commands at elevated privileges on the database servers.

Some other really good blog posts include (but are not limited to):
A Novel WMI Persistence Implementation
The Continuing Evolution of Samas Ransomware (I really like this one...)
Ransomware Deployed by Adversary with Established Foothold
Ransomware as a Distraction

I watched Ryan Nolette's BSidesBoston2016 presentation recently, in part because the title and description caught my attention.  However, at the end of the presentation, I was mystified by a couple of things, but some research and asking some questions cleared it up.  Ryan's presentation was based on a ransomware sample that had been discussed on the Cb blog on 3 Aug 2015; by the time the BSides presentation went on, the blog post was almost a year old.

During the presentation, Ryan talked about bad guys using vshadow.exe (I found binaries here) to create a persistent shadow copy, mounting that copy (via mklink.exe), copying malware to the mounted VSC and executing it, and then deleting all VSCs.  Ryan said that after all of that, the malware was still running.  However, the process discussed in the presentation wasn't quite right...if you want the real process, you need to look at this Cb blog post from 5 Aug 2015.

This is a pretty interesting technique, and given that it was discussed last year (it was likely utilized and observed prior to that) it makes me wonder if perhaps I've missed it in my own investigations...which then got me to thinking, how would I find this during a DFIR investigation?  Ryan was pretty clear as to how he uses Cb to detect this sort of activity, but not all endpoint tools have the same capabilities as Cb.  I'll have to look into some further testing to see about how to detect this sort of activity through analysis of an acquired image.

Saturday, August 13, 2016

LANDesk in the Registry

Some of my co-workers recently became aware of information maintained in the Windows Registry by the LANDesk softmon utility, which is pretty fascinating when you look at it.  The previously-linked post states that, "LANDesk Softmon.exe monitors application execution..." not just installed applications, or just services, but application execution.  The post goes on to state:

Unfortunately, if an application is no longer available the usage information still lives on in the registry.

This goes back to what I've said before about indicators on Windows systems, particularly within the Registry, persisting beyond the deletion or removal of the application, which is pretty awesome.

The softmon utility maintains some basic information about the executed apps within the Software hive, with subkeys named for the path to the app.  The path to the keys in question is:

      MonitorLog\<path to executed app>

Information maintained within the keys includes the following values:
  • Current User
  • First Started
  • Last Started
  • Last Duration
  • Total Duration
  • Total Runs
This site provides information regarding how to decode some of the data associated with the above values; the "First Started" and "Last Started" values are FILETIME objects, and therefore pretty trivial to parse.
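Parsing those two values really is trivial; a FILETIME is a 64-bit count of 100-nanosecond intervals since 1 Jan 1601 (UTC), stored little-endian in the value data.  A minimal sketch:

```python
# Sketch: decode a FILETIME value ("First Started"/"Last Started") --
# a 64-bit count of 100ns intervals since 1 Jan 1601 UTC, stored
# little-endian in the Registry value data.
import struct
from datetime import datetime, timedelta

EPOCH_1601 = datetime(1601, 1, 1)

def filetime_to_datetime(raw8):
    (ft,) = struct.unpack("<Q", raw8)
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

# 116444736000000000 * 100ns after 1601 = the Unix epoch
raw = struct.pack("<Q", 116444736000000000)
print(filetime_to_datetime(raw))   # 1970-01-01 00:00:00
```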

This information isn't nearly as comprehensive as something like Sysmon, of course, but it's much better than nothing.

Sysforensics posted a LANDesk Registry Entry Parser script on GitHub, about 2 yrs ago.  Don Weber wrote the original RegRipper plugin back in 2009, and I made some updates to it in 2013.  There's also a plugin that incorporates the data into a timeline.

Wednesday, August 10, 2016


Data Exfil
A question that analysts get from time to time is "was any data exfiltrated from this system?"  Sometimes, this can be easy to determine; for example, if the compromised system had a web server running (and was accessible from the Interwebs), you might find indications of GET requests for unusual files in the web server logs.  You would usually expect to find something like this if the bad guy archived whatever they'd collected, moved the archive(s) into a web folder, and then issued a GET request to download the archive to their local system.  In many cases, the bad guy has then deleted the archive.  With no instrumentation on the system, the only place you will find any indication of this activity is in the web server logs.

However, for the most part, definitive determination of data exfiltration is almost impossible without the appropriate instrumentation; either having a packet sniffer on the wire at the time of the transfer, or appropriate endpoint agent monitoring process creation events (see the Endpoints section below) in order to catch/record command lines.  In the case of endpoint monitoring, you'd likely see the use of an archiving tool, and little else until you moved to the web server logs (given the above example).

Another area to look is the Background Intelligent Transfer Service, or "BITS".  SecureWorks has a very interesting blog post that illustrates one way that this native Windows service has been abused.  I highly suggest that if you're doing any kind of DFIR or threat hunting work, you do a good, solid read of the SecureWorks blog post.

I am not aware of any publicly-available tools for parsing the BITS qmgr0.dat or qmgr1.dat files, but you can use 'strings' to locate information of interest, and then use a hex editor from that point in order to get more specific information about the type of transfer (upload, download) that may have taken place, and its status.  Also, be sure to keep an eye on those Windows Event Logs, as well.

Finding Bad
Jack Crook recently started a new blog, Finding Bad, covering DFIR and threat hunting topics.

Jack's most recent post on hunting for lateral movement is a good start, but IMHO, the difference in artifacts on the source vs the destination system during lateral movement needs to be clearly delineated.  Yeah, I may be pedantic, but from my perspective, there is actually a pretty huge difference, and that difference needs to be understood, if for no other reason than that the artifacts on each system are different.

Adam recently posted a spreadsheet of various endpoint solutions that are's interesting to see the comparison.  Having detailed knowledge of one of the listed solutions does a level set with respect to my expectations regarding the others.

MAC Addresses in the Registry
I recently received a question from a friend regarding MAC addresses being stored in the Registry.  It turns out, there are places where the MAC address of a system is "stored" in the Registry, just not in the way you might think.  For example, running the RegRipper plugin, we see (at the bottom of the output) something like this:

Unique MAC Addresses:

I should also point out that the plugin, which is about 8 yrs old at this point, might provide some information.

Registry Findings - 2012
The MAC Daddy - circa 2007

I ran across EventMonkey (wiki here) recently, which is a Python-based event processing utility.  What does that mean?  Well, from the wiki, that means "A multiprocessing utility that processes Windows event logs and stores into SQLite database with an option to send records to Elastic for indexing."  Cool.

This definitely seems like an interesting tool for DFIR analysts.  Something else that the tool reportedly does is process the JSON output from Willi's EVTXtract.

As I've mentioned before, later this month I'll be presenting at ArchC0N, discussing some of the misconceptions of ransomware.  I also ran across an interesting blog post recently, Fixing the Culture of Infosec Presentations.  I can't say that I fully agree with the concept behind the post, nor with its contents.  IMHO, the identified "culture" is drawn from too narrow a sample, and it's unclear to which "infosec conferences" this discussion applies.

I will say this...I stopped attending some conferences a while back because of the nature of how they were populated with speakers.  Big-named, headliner speakers were simply given a time slot, and in some cases, did not even prepare a talk.  I remember one such speaker using their time slot to talk about wicca.

At one point in the blog post, the author refers to a particular presenter who simply reads an essay that they've written; the author goes on to say that they prefer that.  What's the point?  If you write an essay, and it's available online, why spend your time reading it to the audience, when the audience is (or should be) fully capable of reading it themselves?

There's another statement made in the blog post that I wanted to comment on...

We have a force field up that only allows like .1% of our community to get on the stage, and that’s hurting all of us. It’s hurting the people who are too afraid to present. It’s hurting the conference attendees. And it’s hurting the conferences themselves because they’re only seeing a fraction of the great content that’s out there.

I completely agree that we're missing a lot of great content, but I do not agree that "we have a force field up"; or, perhaps the way to look at it is that the force field is self-inflicted.  I have seen some really good presentations out there, and the one thing that they all have in common is that the speaker is comfortable with public speaking.  I say "self-inflicted" because there are also a lot of people in this field who are not only afraid to create a presentation and speak, they're also afraid to ask questions, or offer their own opinion on a topic.

What I've tried to do when presenting is to interact with the audience, to solicit their input and engage them.  After all, being the speaker doesn't mean that I know everything...I can't possibly know or have seen as much as all of us put together.  Rather than assume the preconceptions of the audience, why not ask them?  Admittedly, I will sometimes ask a question, and the only sound in the room is me breathing into the mic...but after a few minutes folks tend to start loosening up a bit, and in many cases, a pretty good interactive discussion ensues.  In the end, we all walk away with something.

I also do not believe that "infosec presentations" need to be limited to the rather narrow spectrum described in the blog post.  Attack, defend, or a tool for doing either one.  There is so much more than that out there and available.  How about something new that you've seen (okay, maybe that would be part of "defend"), or a new way of looking at something you've seen?  Want a good example?  Take a look at Kevin Strickland's blog post on the Evolution of the Samas Ransomware.  At the point where he wrote that blog post, SecureWorks (full disclosure, Kevin and I are both employed by SecureWorks) had seen several Samas ransomware cases; Kevin chose to look at what we'd all seen from a different perspective.

There are conferences that have already adopted at least some of the advice in the blog post.  For example, the last time I attended a SANS360 presentation, there were 10 speakers, each with 6 minutes to present.  Some timed their presentations down to the second, while others seemed to completely ignore the 6 minute limit.  Even so, it was great to see a speaker literally remove all of the fluff, focus on one specific topic, and get that across.

Sunday, July 17, 2016


Book Update
I recently and quite literally stumbled across an image online that was provided as part of a challenge, involving a compromised system.  I contacted the author, and discussed the idea of updating the book with my tech editor, and the result of both conversations is that I will be extending the book with an additional chapter.  This image is from a Windows 2008 web server that had been "compromised", and I thought it would be great to add this to the book, given the types of "cases" that were already being covered in the book.

RDP Bitmap Cache Parser
I ran across this French web site recently, and after having Google translate it for me, got a bit better view into what went into the RDP bitmap cache parsing tool. This is a pretty fascinating idea; I know that I've run across a number of cases involving the use of RDP, either by an intruder or a malicious insider, and there could have been valuable clues left behind in the bitmap cache file.

Pancake Viewer
I learned from David Dym recently that Matt (works with David Cowen) is working on something called the "pancake viewer", which is a DFVFS-backed image viewer using wxPython for the GUI.  This looks like a fascinating project, and something that will likely have considerable use, particularly as the "future functionality" is added.

Web Shells
Web shells are nothing new; here is a Security Disclosures blog post regarding web shells from 3 years ago.  What's "new" is what has recently been shared on the topic.

This DFIR.IT blog post provides some references (at the bottom of the post) where other firms have discussed the use of web shells, and this post on the SecureWorks site provides insight as to how a web shell was used as a mechanism to deploy ransomware.

This DFIR.IT blog post (#3 in the series) provides a review of tools used to detect web shells.

A useful resource for Yara rules used to detect web shells is this one by 1aNOrmus.

Wednesday, July 06, 2016

Updates and Stuff

Registry Analysis
I received a question recently, asking if it were possible to manipulate Registry key LastWrite time stamps in a manner similar to file system time stomping.  Page 30 of Windows Registry Forensics addresses this; there's a Warning sidebar on that page that refers to SetRegTime, which directly answers this question.

The second part of the question asked if I'd ever seen this in the wild, and to this point, I haven't.  Then I thought to myself, how would I identify or confirm this?  I think one way would be to make use of historical data within the acquired image; let's say that the key in question is within the Software hive, with a LastWrite time of 6 months prior to the image being acquired.  I'd start by examining the Software hive within the RegBack folder, as well as the Software hives within any available VSCs.  So, if the key has a LastWrite time of 6 months ago, and the Software hive from the RegBack folder was created 4 days ago and does NOT include any indication of the key existing, then you might have an issue where the key was created and time stomped.
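The reasoning above boils down to a simple check: if a backup hive (RegBack, or one from a VSC) does not contain the key, the key must have been created after the backup was made, so any LastWrite time older than the backup is suspect.  A minimal sketch (function and parameter names are mine, for illustration):

```python
from datetime import datetime, timedelta

def lastwrite_suspicious(key_lastwrite: datetime,
                         backup_created: datetime,
                         key_in_backup: bool) -> bool:
    """Flag a key whose LastWrite time predates a backup hive that
    does NOT contain the key: the key must have been created after
    the backup, so an older LastWrite time suggests it was stomped."""
    return (not key_in_backup) and key_lastwrite < backup_created

# The scenario from above:
now = datetime(2016, 7, 6)
lastwrite = now - timedelta(days=180)   # key LastWrite ~6 months ago
regback = now - timedelta(days=4)       # RegBack hive created 4 days ago
print(lastwrite_suspicious(lastwrite, regback, key_in_backup=False))
```

You'd feed this from whatever you use to walk hives (RegRipper, python-registry, etc.); the point is the comparison, not the plumbing.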

Powershell Logging
I personally don't use Powershell (much, yet), but there have been times when I've been investigating a Windows system and found some entries in the Powershell Event Log that do little more than tell me that something happened.  While conducting analysis, I have no idea if the use of Powershell is pervasive throughout the infrastructure, if it's something one or two admins prefer to use, or if it was used by an attacker...the limited information available by default doesn't give me much of a view into what happened on the system.

The good news is that we can fix this; FireEye provides some valuable instructions for enabling a much greater level of visibility within the Windows Event Log regarding activity that may have occurred via Powershell.  This Powershell Logging Cheat Sheet provides an easy reference for enabling a lot of FireEye's recommendations.
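As I understand the FireEye guidance and the cheat sheet, the core settings come down to a couple of policy keys; something like the following .reg fragment enables script block and module logging (verify the exact values against the cheat sheet before deploying, as my recollection of the details may be off):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging]
"EnableScriptBlockLogging"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging]
"EnableModuleLogging"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging\ModuleNames]
"*"="*"
```

In a domain, you'd push the same settings via GPO rather than a .reg file.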

So, what does this mean to an incident responder?

Well, if you're investigating Powershell-based attacks, it can be extremely valuable, IF Powershell has been updated and the logging configured.  If you're a responder in an FTE position, then definitely push for the appropriate updates to systems, be it upgrading the Powershell installed, or upgrading the systems themselves.  When dealing with an attack, visibility is key...you don't want the attacker moving through an obvious blind spot.

This is also really valuable for testing purposes; set up your VM, upgrade the Powershell and configure logging, and then go about testing various attacks; the FireEye instructions mentioned above refer to Mimikatz.

Powershell PSReadLine Module
One of the things that's "new" to Windows 10 is the incorporation of the PSReadLine module into Powershell v5.0, which provides a configurable command history (similar to a bash_history file).  Once everything's up and running correctly, the history file is found in the user profile at C:\Users\user\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt.
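Once you've exported a ConsoleHost_history.txt file, a quick triage pass is easy to script.  The keyword list below is purely illustrative...use whatever indicators fit your case:

```python
SUSPICIOUS = ('invoke-expression', 'iex', 'downloadstring',
              'frombase64string', 'bypass', 'hidden')  # illustrative keywords

def triage_history(text: str):
    """Return (line_number, command) pairs that contain keywords of
    interest.  PSReadLine history is stored one command per line."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        lowered = line.lower()
        if any(k in lowered for k in SUSPICIOUS):
            hits.append((n, line.strip()))
    return hits

sample = "Get-Process\npowershell -ep Bypass -w Hidden\nGet-ChildItem"
for n, cmd in triage_history(sample):
    print(n, cmd)
```

Nothing fancy, but it narrows a long history file down to the lines worth reading closely.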

For versions of Windows prior to 10, you can update Powershell and install the module.

This is something else I'd definitely upgrade, configure and enable in a testing environment.

CLI Tools
As someone who's written a number of CLI tools in Perl, I found this presentation by Brad Lhotsky to be pretty interesting.

Tool Listing
Speaking of tools, I recently ran across this listing of tools, and what caught my attention wasn't the listing itself as much as the format.  I like the Windows Explorer-style listing that lets you click to see what tools are listed...pretty slick.

I was doing some preparation for a talk I'm giving at ArchC0N in August, and was checking out the MMPC site.  The first thing I noticed was that the listing on the right-hand side of the page had 10 new malware families identified, 6 of which were ransomware.

My talk at ArchC0N is on the misconceptions of ransomware, something we saw early on in Feb and March of this year, but continue to see even now.  In an attempt to get something interesting out on the topic, a number of media sites were either adding content to their articles that had little to do with what was actually happening at a site infected with ransomware, or they were simply parroting what someone else had said.  This has led to a great deal of confusion as to the infection vectors for ransomware in general, as well as what occurred during specific incidents.

More on the Registry...
With respect to the Registry, there are more misconceptions that continue unabated, particularly from Microsoft.  For example, take a look at the write-up for the ThreatFin ransomware, which states:

It can create the following registry entries to ensure it runs each time you start your PC:

The write-up then goes on to identify values added to the Run key within the HKCU hive.  However, this doesn't cause the malware to run each time the system is started, but rather when that user logs into the system.
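To make the distinction concrete, here's a minimal sketch (the function and wording are mine, not from any vendor write-up) of when a Run key value actually fires:

```python
def run_key_trigger(hive: str) -> str:
    """Map the hive containing a Run key value to when it fires.

    HKLM Run values execute at any user's interactive logon; HKCU Run
    values execute only when that specific user logs in.  Neither runs
    'each time you start your PC' absent a logon.
    """
    hive = hive.upper()
    if hive in ('HKLM', 'HKEY_LOCAL_MACHINE'):
        return 'runs at logon of any user'
    if hive in ('HKCU', 'HKEY_CURRENT_USER'):
        return 'runs only when that user logs in'
    raise ValueError('unexpected hive: ' + hive)

print(run_key_trigger('HKCU'))  # runs only when that user logs in
```

If the malware truly ran at system start, you'd expect persistence via a service, a scheduled task, or similar boot-time mechanism, not an HKCU Run value.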