Friday, January 19, 2018


What's old is new
Some discussions I've been part of (IRL and online, recently and a while ago) have been about what it takes to get started in the DFIR field, and one of the activities I've recommended and pushed, over certifications, is running one's own experiments and blogging about the findings and experience.  The immediate push-back from many on that topic is often with respect to content, and my response is that I'm not looking for something new and innovative; I'm more interested in how well you express yourself.

That's right, things we blog about don't have to be new or innovative.  Not long ago, Richard Davis tweeted that he'd put together a video explaining shellbag forensics.  And it's a good thing that we're talking about these things.  Some of these topics...shellbags, ShimCache, etc...are poorly understood and need to be discussed regularly, because not everyone who needs to know this stuff is going to see it when they need it.  I have blog posts going back almost seven and a half years on the topic of shellbags that are still valid today.  Taking that even deeper, I also have blog posts going back almost as far about shell items, the data blobs that make up shellbags and LNK files (and by extension, JumpLists), and that provide the building blocks of a number of other important evidentiary items found in the Windows Registry, such as RecentDocs and ComDlg32 values.

My point is that DFIR is a growing field, and many of the available pipelines into the industry don't provide complete enough coverage of a lot of topics.  As such, it's incumbent upon analysts to keep up on things themselves, something that can be done through mentoring and self-exploration.  A great way to get "into" the industry is to pick a topic or area, and start blogging your own experience and findings.  Develop your ability to communicate in a clear and concise manner.

It's NOT just for the military
Over the two decades that I've been working in the cybersecurity field, there've been a great many times where I've seen or heard something that has sparked a memory from my time on active duty.  When I was fresh out of the military, I initially found a lot of folks, particularly in the private sector, who were very reluctant to hear about anything that had to do with the military.  If I, or someone else, started a conversation with, "In the military...", the folks on the other side of the table would cut us off and state, "...this isn't the military, that won't work here."

However, over time, I began to see that not only would what we were talking about definitely work, but sometimes, folks would talk about "military things" as if they were doing them, when nothing was actually being applied.  Not at all.  It was just something they were saying to sound cool.

"Defense-in-depth" is something near and dear to my heart, because throughout my time in the military, it was something that was at the forefront of my mind from pretty much the first day of training.  Regardless of location...terrain model or the woods of Quantico...or the size of the unit...squad, platoon, company...we were always pushed to consider things like channelization and defense-in-depth.  We were pushed to recognize and use the terrain, and what we had available.  The basic idea was to have layers to the defense that slowed down, stopped, or drove the enemy in the direction you wanted them to go.

The same thing can be applied to a network infrastructure.  Reduce your attack surface by making sure, for example, that a DNS server is only providing DNS services, not RDP and a web server as well.  Don't make it easy for the bad guy, and don't leave "low hanging fruit" lying around within easy reach.
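To make that idea concrete, here's a minimal sketch (the function name, port lists, and "allowed" policy are all made up for illustration) of the kind of check that flags a server answering on ports it has no business exposing:

```python
import socket

def unexpected_open_ports(host, probe_ports, allowed_ports, timeout=0.5):
    """Return ports from probe_ports that are open on host but not in allowed_ports.

    A crude attack-surface check: a DNS server, for example, shouldn't
    also be answering on 3389 (RDP) or 80/443 (a web server).
    """
    open_unexpected = []
    for port in probe_ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 when the TCP handshake succeeds
            if s.connect_ex((host, port)) == 0 and port not in allowed_ports:
                open_unexpected.append(port)
    return open_unexpected
```

This is a sketch, not a scanner; in practice you'd pull the expected-services list from an asset inventory rather than hard-coding it.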

A great deal of what the military does in the real world can be easily transitioned to the cyber world, and over the years that I've been working in this space, I have seen/heard folks say that "defense-in-depth has failed"...yet, I've never seen it actually employed.  Things like the use of two-factor authentication, segmentation, and role-based access can make it such that a bad guy is going to be really noisy in their attempts to compromise your infrastructure...provided you've put something in place that will "hear" them (i.e., EDR, monitoring).

Not a military example, but did you see the first "Mission: Impossible" movie?  Remember the scene where Tom Cruise's character made it back to the safe house, and when he got to the top of the stairs, took the light bulb out of the socket and crushed it in his jacket?  He then spread the shards out on the now-darkened hallway floor as he backed toward his room.  This is a really good example for network defense, as he made the one pathway to the room more difficult to navigate, particularly in a stealthy manner.  If you kept watching, you saw that he was awakened by someone stepping on a shard from the broken light bulb, alerting him to their presence.

Compartmentalization and segmentation are other things that security pros talk about often; if someone from HR has no need whatsoever to access information in, say, engineering or finance, there should be controls in place, but more importantly, why should they be able to access it at all?  I've seen a lot of what I call "bolt-on M&As", where a merger and acquisition takes place and fat pipes with no controls are used to connect the two organizations.  What was once two small, flat networks is now one big, flat network, where someone in marketing from company A can access all of the manufacturing docs in company B.

The US Navy understands compartmentalization very well; this is why the bulkheads on Navy ships go all the way to the ceiling.  In the case of a catastrophic failure, where flooding occurs, sections of the ship can be shut off from access to others.  Consider the fate of the USS Cole versus that of the Titanic. 'Nuff said!

Sometimes, the military examples strike too close to home.  I've been reading Ben MacIntyre's Rogue Heroes, a history of the British SAS.  In the book, the author describes the preparation for a raid on the port of Benghazi, and that while practicing for the raid in a British-held port, a sentry noticed some suspicious activity, only to be informed, in quite colorful language, to mind his own business.  And he did.  According to the author, this was later repeated at the target port on the night of the raid...a sentry aboard a ship noticed something going on and inquired, only to be informed (again, in very colorful language) that he should mind his own business.  I've seen a number of incidents where this very example has played out...in fact, I've seen it many times, particularly during targeted adversary investigations.  During one particular investigation, while examining several systems, I noticed activity indicative of an admin logging into a system (during regular work hours, and from the console), 'seeing' the adversary's RAT on the system, and removing it.  Okay, I get that the admin might not be familiar with the RAT and would just remove it from a system, but when they did the same thing on a second system, and then failed to inform anyone of what they'd seen or done, there's no difference between those actions and what the SAS troopers had encountered in the African desert.

I recently ran across this WPScans blog post, which discusses finding PHP and Wordpress "backdoors" using a number of methods.  I took the opportunity to download the archive linked at the end of the blog post, and ran a Yara rule file I've been maintaining across it, and got some interesting hits.

The Yara rule file I used started out as a collection of rules pulled in part from various rules found online (DarkenCode, Thor, etc.), but over time I have added rules (or modified existing ones) based on web shells I've seen on IR engagements, as well as shells others have seen and shared with me.  For those who've shared web shells with me, I've shared either rules or snippets of what could be included in rules back with them.
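The general approach can be sketched without Yara at all.  The patterns below are a few strings commonly associated with PHP backdoors, chosen purely for illustration; they are not taken from that rule file, and a maintained Yara rule set is far more capable:

```python
import re

# A few strings commonly seen in PHP web shells; illustrative only,
# not a substitute for a maintained Yara rule set.
WEBSHELL_PATTERNS = [
    re.compile(rb"eval\s*\(\s*base64_decode", re.I),
    re.compile(rb"eval\s*\(\s*gzinflate", re.I),
    re.compile(rb"assert\s*\(\s*\$_(GET|POST|REQUEST)", re.I),
    re.compile(rb"preg_replace\s*\(.*/e", re.I),  # /e modifier executes the replacement
]

def scan_bytes(data):
    """Return the indices of the patterns that hit in the given file content."""
    return [i for i, pat in enumerate(WEBSHELL_PATTERNS) if pat.search(data)]
```

The real value, as described above, is in the feedback loop: every engagement feeds new strings or logic back into the rule set.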

So, the moral of the story is that when finishing up a DFIR engagement, look for those things from that engagement that you can "bake back into" your tools (RegRipper, Yara rules, EDR filters, etc.) and analysis processes.  This is particularly valuable if you're working as part of a team, because the entire team benefits from the experience of one analyst.

Additional Resources:
DFIR.IT - Webshells

I made some updates recently to a couple of tools...

I got some interesting information from a RegRipper user who'd had an issue with the plugin on Windows 10.  Thanks to their providing sample data to work with, I was (finally) able to dig into the data and figure out how to address the issue in the code.

I also updated the and plugins, based on input from a user.  It's funny, because at least one of those plugins hadn't been updated in a decade.

I added an additional event mapping to the eventmap.txt file, not due to any new artifacts I'd seen, but as a result of some research I'd done into errors generated by the TaskScheduler service, particularly as they related to backward compatibility.

All updates were sync'd with their respective repositories.

Saturday, January 06, 2018

WindowsIR 2018: First Steps

I haven't mentioned ransomware in a while, but I ran across something that really stood out to me, in part because it included a couple of things we don't usually see in media coverage of the cybersecurity arena...numbers.

Okay, first, there's this article from GovTech that describes an issue with ransomware encountered by Erie County Medical Center in April, 2017.  Apparently, in the wee hours of the morning, ransom notes started appearing on computer screens, demanding the equivalent (at the time) of $30,000USD to decrypt the files.  The hospital opted to not pay the ransom, and it reportedly took 6 weeks to recover, with a loss of $10MUSD...yes, the article said, "MILLION". 

A couple of other things from the article...the hospital reportedly took preparatory steps and "upgraded" their cybersecurity (not sure what that means, really...), and increased their insurance from $2MUSD to $10MUSD. Also, the hospital claimed that its security was now rather "advanced", to the point where the CEO of the hospital stated, “Our cybersecurity team said they would have rated us above-average before the attack.”

So, here we have our numbers, and the fact that the hospital took a look at their risks and tried to cover what they could.  I thought I'd take a bit of a look around and see what else I could find about this situation, and I ran across this Barkly article, that includes a timeline of the incident itself.  The first two bullets of that timeline are:

Roughly a week before the ransom notes appear attackers gain access to one of ECMC's servers after scanning the Internet for potential victims with port 3389 open and Remote Desktop Protocol (RDP) exposed. They then brute-force the RDP connection thanks to "a relatively easy default password."

Once inside, attackers explore the network, and potentially use Windows command-line utility PsExec to manually deploy SamSam ransomware on an undisclosed number of machines...

Ah, yes...readers of this blog know a thing or two about Samsam, or Samas, don't you?  There's this blog post that I authored, and there's this one by The Great Kevin Strickland.

Anyway, I continued looking to see if I could find any other article that included some sort of definitive information that would corroborate the Barkly article, and while I did find several articles that referred to the firm that responded to and assisted ECMC in getting back up and running, I had trouble finding anything else that specifically supported the Barkly document.  This article states that the ransom note included 'hot pink text'...some of the variants of Samas ransomware that I looked at earlier this year included the ransom note HTML in the executable itself, and used a misspelled font color...the HTML tag read "DrakRed".  I did, however, find this article that seemed to corroborate the Barkly article.

So, a couple of lessons we can learn from this...

1.  A vulnerable web or RDP server just hanging out there on the Internet, waiting to be plucked is not "above-average" cybersecurity.

2.  An EDR solution would have paid huge dividends; the solution gets programmed into the budget cycle, and can lead to much earlier detection (or prevention) of this sort of thing.  Not only can an EDR solution detect the adversary's activities before they get to the point of deploying the ransomware, but it can also prevent processes from running, such as when the WinWord.exe process (MS Word) attempts to launch something else, such as a command prompt, PowerShell, rundll32.exe, etc.
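As a sketch of the kind of parent/child filter described above (the event records, field names, and process lists here are invented for illustration; any real EDR feed has its own schema and far richer logic):

```python
# Illustrative only: flag events where an Office application spawns a
# shell or script host, a classic "weaponized document" behavior.
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "rundll32.exe", "wscript.exe"}
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}

def flag_process_events(events):
    """Return the events where an Office app launched a shell/script host."""
    return [
        e for e in events
        if e["parent"].lower() in OFFICE_PARENTS
        and e["child"].lower() in SUSPICIOUS_CHILDREN
    ]
```

In a real deployment, a rule like this would be one detection among many, and would alert (or block) rather than just filter a list.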

If you still don't think ransomware is an issue, consider what happened to Spring Hill, TN, and Mecklenburg County, NC.  The CEO of Fortalice Solutions (the cybersecurity firm that responded to assist Mecklenburg County), Theresa Payton, discussed budget issues regarding IR in this CSOOnline article; incorporating early detection into the IR plan, and by extension, the budget, will end up saving organizations from the direct and indirect costs associated with incidents.

EDR Solutions
Something to keep in mind is that your EDR solution is only as good as those who maintain it.  For example, EDR can be powerful and provide a great deal of capabilities, but it has to be monitored and maintained.  Most, if not all, EDR solutions provide the capability to add new detections, so you want to make sure that those detections are being kept up to date.  For example, detecting "weaponized" or "malicious" Word documents...those email attachments that, when opened, will launch something else...should be part of the package.  Then when something new comes along, such as the MS Word "subdoc" functionality ("functionality" being a 'feature' that can be abused...), you would want your EDR solution updated with the ability to detect that, as well as your infrastructure searched to determine if there are any indications of someone having already abused this feature.

RegRipper Plugin Update
The morning of 1 Jan 2018, I read a recent blog post by Adam/Hexacorn regarding an alternative means by which executable image files can load DLLs, and given my experience with targeted threat hunting and the pervasiveness I've seen of things like DLL side loading, and adversaries taking advantage of the DLL search order to load malicious DLLs (loaded by innocuous and often well-known EXEs), I figured it was a good bit of kit to have, so I wrote a RegRipper plugin.  It took all of 10 min to write and test, mostly because I used an already-existing plugin (something Corey Harrell had mentioned a while back in his blog) as the basis for the new one.  I then uploaded the completed plugin to the GitHub repository.

Sunday, December 31, 2017

WindowsIR 2017

I thought that for my final post of 2017, I'd pull together some loose ends and tie off some threads from throughout the year, and to do that, I figured I'd just go through the draft blog posts I have sitting around, pull out the content, and put it all into one final blog post for the year.

Investigating Windows Systems
Writing for my next book, Investigating Windows Systems, is going well.  I've got one more chapter to get completed and in to my tech reviewer, and the final manuscript is due in April, 2018.  The writing is coming along really well, particularly given everything that's gone on this year.

IWS is a departure from my previous books, in that instead of walking through artifacts one after another, this book walks through the process of analysis, using currently available CTF and forensic challenges posted online, and specifically calling out analysis decisions along the way.  For example, at one point in the analysis, why did I opt to take this action, rather than that one?  Why did I choose to pivot on this IP address, rather than run a scan?

IWS is not a book about the basics.  From the beginning, I start from the point that whoever is reading the book knows about the MFT, understands what a "timeline" is, etc.  Images and analysis scenarios are somewhat limited, given that I'm using what's already available online, but the available images do range from Windows XP, through Windows 10.  In several instances in the book, I mention creating a timeline, but do not include the exact process in the text of the book, for a couple of reasons.  One is that I've already covered the process.  The other is that this is the process I use; I don't require anyone to use the exact same process, step by step.  I will include information such as the list of commands used in online resources in support of the book, but the reader can create a timeline using any method of their choosing.  Or not.

Why Windows XP?  Well, this past summer (July, 2017), I was assisting with some analysis work for NotPetya, and WinXP and 2003 systems were involved.  Also, the book is about the analysis process, not about tools.  My goal is, in part, to illustrate the need for analysts to choose a particular tool based on their understanding of the image(s) being analyzed and the goals of the analysis.  Too many times, I've heard, "...we aren't finished because vendor tool X kept crashing, and we kept restarting it...", without that same analyst having a reasoned decision behind running that tool in the first place.

Building a Personal Brand in InfoSec
I've blogged a couple of times this year (here, and here) on the topic of "getting started" in the cyber security field.  In some cases, thoughts along these lines are initiated by a series of questions in online forums, or some other event.  I recently read this AlienVault blog post on the topic, and I thought I'd share my thoughts on the topic, with respect to the contents of the article itself.  The author makes some very good points, and I saw an opportunity to tie their comments in with my own.

- Blogging is a good way to showcase your knowledge
Most definitely.  Not only that, a blog post is a great way to showcase your ability to write a coherent sentence.  Sound harsh?  Maybe it is.  I've been in the infosec/cybersecurity industry for about 20 yrs, and my blog goes back 13+ yrs.  This isn't me bragging, just stating a fact. When I look at a blog article from someone...whether they're new to the industry, have been in the industry for a while, or are returning to blogging after being away for a while...the first thing that stands out in my mind isn't the content as much as it is the author's ability to communicate.  This is something that is not unique to me; I've read about this in books such as "The Articulate Executive in Action", and it's all about that initial, emotional, visceral response.

There are a LOT of things you can use as a basis for writing a blog post.  The first step is understanding that you do not have to come up with original or innovative content.  Not at all.  This is probably the single most difficult obstacle to blogging for most folks.  From my experience working on several consulting teams over two decades, it's the single most used excuse for not sharing information amongst the team itself, and I've also seen/heard it used as a reason for not blogging.

You can take a new (to you) look at a topic that's been discussed.  One of the biggest misconceptions (or maybe the biggest excuse) for folks is that they have nothing to share that's of any significance or value, and that simply is not the case at all.  Even if something you may write about has been seen before, the simple fact that others are still seeing it, or others are seeing it again after an apparent absence, is pretty significant.  Maybe others aren't seeing it due to some change in the delivery mechanism; for example, early in 2016, there was a lot of talk about ransomware, and most of the media articles indicated that ransomware is email-borne.  What if you see some ransomware that was, in fact, email-borne but bypassed protection/detection mechanisms due to some variation in the delivery?  Blah blah ransomware blah blah email blah user clicked on it blah.  Whether you know it or not, there's a significant story there.  Or, maybe I should say, whether you're willing to admit it to yourself or not, there's a significant story there.

Don't feel that you can create your own content?  You can write book reviews, or reviews of individual chapters of books.  I would caution you, however...a "book review" that consists of "...chapter 1 covers blah, chapter 2 covers blah..." is NOT a review at all.  It's the table of contents.  Even when reviewing a single chapter, there's so much to be said...what was it that the author said, or was it how they said it?  How did what you read in the pages impact you, or your daily SOP?  Too many times I've seen reviews consist of little more than a table of contents, and not be abundantly helpful.

- Conferences are an absolute goldmine for knowledge...
They definitely are, and the author makes a point of discussing what can be the real value of conferences, which is the networking aspect. 

A long time ago, I found that when I attended conference presentations, it felt like the first day of high school.  I'd confidently walk into a room, take my seat, and within a few minutes of the presenter starting to speak, start to wonder if I was in the correct room.  It often turned out that either the presentation title was a placeholder, or that I had completely misinterpreted the content from the five words used in the title.  Okay, shame on me.  Often, however, presenters don't dig into the detail that I would've hoped for; on the other hand, when I've presented on what something looks like (i.e., lateral movement, etc.), I've received a great deal of feedback, and when I've attended such presentations, I've had a great deal of feedback (and questions) myself.  Often a lot of the detail...what did you see, what did it look like, why did you decide to go left instead of right...comes from the networking aspect of conferences.

- Certifications are a hot topic in the tech industry, and they are HR’s best friend for screening applicants.
True, very true.  But this goes back to the blogging discussed above...I'm not so much interested in the certifications one has, as much as I'm interested in how the knowledge and skills are used and applied.  Case in point, I once worked with someone (several years ago) who went off to a week-long training course in digital forensics, and within 6 weeks of the end of the course, wrote a report to a client in which they demonstrated their lack of understanding of the MFT.  In a report.  To a client.

So, yes, certifications are viewed as providing an objective measure of an applicant's potential abilities, but the most important question to ask is, when someone is sent off to training, does anyone hold them accountable for what they learned?  When they come back from the training, do they have new tools, processes, and techniques that they then employ, and you can see the result of this in their work?  Or, is there little difference between the "before" and "after"?

On Writing
I mentioned above that I'm working on completing a book, and over the last couple of weeks/months, I've run across some excellent advice for folks who want to write books that cover 'cyber' topics.  Ed Amoroso shared some great Advice For Cyber Authors, and Brett Shavers shared a tip for those thinking of writing a DFIR book.

Fast Times at DFIR High
One of the draft posts I'd started earlier this year had been one in which I would recount the "funny" things that I'd seen go on over the past 20 years in infosec (what we used to call "cybersecurity"), and specifically within DFIR consulting work.  I'm sure that a great deal of what I could recount would ring true for many, and that many would also have their own stories, or their own twists on similar stories.  I'd had vignettes in the draft written down, things like a client calling late at night to declare an emergency and demand that someone be sent on-site before they provided any information about the incident at all, only to have someone show up and be told that they could go home (after arriving at the airport and traveling on-site, of course). Or things like a client sending password-protected archives containing "information you'll want", but no other description...requiring that they all be opened and examined, questions asked as to their context, etc.

I opted to not continue with the blog article, and I deleted it, because it occurred to me that it wasn't quite as humorous as I would have hoped.  The simple fact is that the types of actions I experienced in, say, 2000, are still being seen by active IR consultants today.  I know because I've talked to some IR folks who've experienced them.  Further, it really came down to a "...cast the first stone..." issue; who are we (DFIR folks, responders, etc.) to chide clients for their actions (however inappropriate or misguided we think they are...) when our own actions are not pristine?  Largely, as a community (and I'm not including specific individuals in this...), we don't share experiences, general findings, etc., with each other.  Some do, most don't...and life goes on.

Final Thought for 2017
EDR. Endpoint visibility is going to become an even more important issue in the coming year.  Yes, I've refrained from making predictions, and I'm not making one right now...I'm simply saying that the past decade has shown the power of early detection, and the differences in outcomes when an organization understands that a breach has likely already occurred, and actively prepares for it.  Just this year, we've seen a number of very significant breaches make it into the mainstream media, with several others being mentioned but not making, say, the national news.  Even more (many, many more) breaches occur without ever being publicly announced in any manner.

Equifax is one of the breaches that we (apparently) know the most about, and given what's appeared in the public eye, EDR would've played a significant role in avoiding the issue overall.  Given what's been shared publicly, someone monitoring the systems would've seen something suspicious launched as a child process of the web server (such filters or alerts should be common place) and been able to initiate investigative actions immediately.

EDR gives visibility into endpoints, and yes, there is a great deal of data headed your way; however, there are ways to manage this volume of data.  Employ threat intelligence to alert you to those actions and events that require your attention, and have a plan in place for how to quickly investigate and address them.  The military refers to this as "immediate actions", things that you learn in a "safe" environment and that you practice in adverse environments, so that you can effectively employ them when required.  Ensure that you cover blind spots; not just OS variants, but also, what happens when the adversary moves from command line access (via a Trojan or web shell) to GUI shell (Windows Explorer) access?  If your EDR solution stops providing visibility when the adversary logs in via RDP and opens any graphical application on the desktop, you have a blind spot.

Effective use of EDR solutions can significantly reduce costs associated with the current "state of the breach"; that is, getting notified by an external third party that you've suffered a breach, weeks or months after the initial compromise occurred.  With more immediate detection and response, your requirement to notify (based on state laws, regulatory bodies, etc.) will be obviated.  Which would you rather be...the next Equifax, or, because you budgeted for a solution, you can go to regulatory bodies and state, yes, we were breached, but no critical/sensitive data was accessed?

Sunday, December 24, 2017

"Sophisticated" Attacks

I ran across an interesting article recently, which described a "highly sophisticated" email scam, but didn't go into a great deal of detail (or any level of detail, for that matter) as to how the scam was "sophisticated".

To be honest, I'm curious...not morbidly so, but more in an intellectual sense.  What is "sophisticated"?  I've been in meetings where attacks were described as sophisticated, and then conducted the actual incident response or root cause investigation, and found out that the attack was perpetrated via a system running the IIS web server and Terminal Services.  Web shells were found in the web content folder, and the Windows Event Logs showed a lengthy history of failed login attempts via RDP.

Very often, what is said to be "sophisticated" is framed as such due to a dearth of actual data, which occurred for a number of reasons, most of which trace back to a simple lack of preparedness on the part of the organization.  For example, were any attempts made to prevent the attack from succeeding?  I've seen a DNS server, per the client's description, running IIS and Terminal Services, with the web server logs chock full of scan attempts, and the Windows Event Logs full of failed login attempts...yet nothing was done, such as to say, "yeah, hey, this is a DNS server, we shouldn't be running these other services...", etc.  The system lacked any sort of configuration modifications "out of the box" to allow for things like increased logging, increased detail in logging, or instrumentation to provide visibility.  As such, in investigating the system, there was scant little we could determine with respect to things like lateral movement, or other "actions on the objective".
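Even minimal triage of that "failed login" noise is straightforward once the logs are exported.  A rough sketch (the record structure here is invented, though 4625 is the actual Security Event Log ID for a failed logon on Vista and later):

```python
from collections import Counter

# Sketch: summarize failed-logon noise from exported Security log records.
# The record format (event_id, source_ip) is invented for illustration;
# a real export (e.g., from an EVTX parser) needs its own field mapping.
FAILED_LOGON = 4625  # Security Event Log ID for a failed logon (Vista+)

def failed_logon_sources(records, threshold=10):
    """Return {source_ip: count} for IPs with at least threshold failed logons."""
    counts = Counter(r["source_ip"] for r in records if r["event_id"] == FAILED_LOGON)
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

A burst of thousands of 4625 events from one external IP against an exposed RDP service is the brute-force pattern described in the ECMC timeline above...hardly "highly sophisticated".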

At this point, my report is shaping up to include words such as "dearth of available data due to a lack of instrumentation and visibility...", where the messaging at the client level is "this was a highly sophisticated attack". 

I've also seen ransomware attacks described as "highly sophisticated", as they apparently bypassed email protections.  In such cases, the data that illustrated the method of access and propagation of the ransomware itself was available, but apparently, it's easier to just say that the attack was "highly sophisticated", and leave it at that.  After all, who's really going to ask for anything beyond that?

After reading that first article, I then ran a quick Google search, and two of the six hits on the first page, without scrolling down, included the term "myth" in the URL title.  In fact, this Security Magazine article from April 2017 is more in line with my own experience in two decades of cybersecurity consulting.  Several years ago, I was working with a client, and during the first few minutes of the first meeting, one of the IT staff made a point of emphasizing the fact that they did NOT use communal admin accounts.  I noted this, but also noted the emphasis...because later in the investigation, we found that the adversary had left a little "treat".  At one point, the adversary had pushed out RAT installers to less than a dozen systems via at.exe, and about a week later, pushed out a close variant of that same RAT installer to one other system; however, this installer had a different C2 configuration, so the low-level indicators (C2 IP address, file hash) that the client was looking for were obviated.  This installer was pushed out to the StartUp folder for the...wait for it..."admin" account.  Yes, that is the name of the account..."admin".  The "admin" profile already existed on a number of systems, meaning that the account had been set up in the domain and then used to log into several systems; pushing the RAT installer to one system, in that profile's StartUp folder, likely provided the adversary with a means of access back into the infrastructure in the event the other RATs were discovered and IR activities began.

As in the instance described above, a great many of the incidents I (and others) have responded to are not terribly sophisticated at all.  In fact, the simple elegance of some of these incidents is impressive, given that the adversary knows more about the function of, say, Windows networking than what you can achieve through the GUI.

For example, if an adversary modifies the "hosts" file on a Windows system, is that "highly sophisticated"?  Here is a MS article on host name resolution order; it was updated just shy of a year ago (as of this writing), but take a close look at the OS versions referenced in the article.  So, are you not seeing suspicious domain queries in your DNS logs because there is no malware on your infrastructure, or is it because someone took advantage of Microsoft capabilities that go back over 20 years?
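To make this concrete, here's a minimal Python sketch of what checking a "hosts" file for adversary-added entries might look like; the sample entries and domain below are entirely made up for illustration, and on a live system you'd read C:\Windows\System32\drivers\etc\hosts instead:

```python
def parse_hosts(text):
    """Parse hosts-file text into (ip, hostname) pairs, skipping comments."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            entries.append((ip, name.lower()))
    return entries

# Hypothetical hosts-file content for demonstration purposes only.
sample = """\
# default entry
127.0.0.1   localhost
203.0.113.7 update.example-av.com   # redirect AV updates to the adversary
"""

for ip, host in parse_hosts(sample):
    if ip not in ("127.0.0.1", "::1"):
        print(f"non-local mapping: {host} -> {ip}")
```

Anything mapping a legitimate-looking domain to an address you don't recognize is worth a closer look...no malware required.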

Speaking of archaic capabilities, anyone remember this one?

My point is, what is truly a "highly sophisticated" attack?

Monday, October 30, 2017


Case Studies
Brett Shavers posted a fascinating article recently, in which he discussed the value of case studies throughout his career, going back to his LEO days.  I've thought for some time now that this is one of the great missed opportunities of the DFIR community...that more analysts don't share what they've seen or done.  At the same time, I've seen time and again the benefit that comes from doing something like this...we get a different perspective, we learn a little bit more (or something new), *OR* we learn that we were incorrect about something and find out what the real deal is...and this can have a huge impact on our future work.  Throughout the time I've been involved in the community, I've heard "...but I don't want to be corrected..." more than once, and to be honest, I'm not at all sure why that is.  God knows that I don't know everything, and if I've gotten something wrong, I'd love to know what it is so that I don't keep making the same mistake over and over again.

Taking Brett's blog post a bit further (whether he knew it or not), Phill Moore shared a post about documenting his work.

Cisco Talos Intelligence - Decoy Documents
The Cisco Talos Intel team recently blogged regarding the use of decoy documents during a real cyber conflict.  What I found fascinating about this write-up was the different approach that was taken with the documents; specifically, the payload is base64-encoded and spread across the document metadata.  When the document is opened, a macro reportedly extracts the segments from the metadata variables and concatenates them together, resulting in a base64-encoded PE file.  The folks at Talos were kind enough to provide hashes for three decoy documents, which were all available on VirusTotal (searching for some of the metadata items returned this Hybrid Analysis link).  As such, I took the opportunity to run them through my own tools, to see what I could see.
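To illustrate the technique (with entirely hypothetical field names and payload bytes...the actual metadata variables in the Talos samples differ), the reassembly the macro reportedly performs boils down to something like this:

```python
import base64

# Fabricated payload; the real samples carried a base64-encoded PE file.
payload = b"MZ" + b"\x90" * 16          # 'MZ' is the PE magic
b64 = base64.b64encode(payload).decode()

# Split the base64 text across three hypothetical metadata fields.
chunk = len(b64) // 3 + 1
doc_metadata = {
    "Company":  b64[:chunk],
    "Manager":  b64[chunk:2 * chunk],
    "Category": b64[2 * chunk:],
}

# What the macro reportedly does: concatenate the fields in order, then decode.
reassembled = "".join(doc_metadata[k] for k in ("Company", "Manager", "Category"))
recovered = base64.b64decode(reassembled)
print(recovered[:2])   # b'MZ'
```

The up-side of this approach, from the adversary's perspective, is that no single metadata field contains anything that looks like an executable.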

The first interesting aspect of the documents is that even though the three documents were different in structure, some of the metadata (including time stamps) was identical.

For example, the three documents all contained the same information, illustrated below:

Authress     : Rafael Moon
LastAuth     : Nick Daemoji
RevNum       : 7
AppName      : Microsoft Office Word
Created      : 03.10.2017, 01:36:00
Last Saved   : 04.10.2017, 14:20:00

Even though the first document had markedly different streams from the other two, the root entry and directory entries all had the same time stamp and CLSID:

Root Entry  Date: 04.10.2017, 14:20:11  CLSID: 00020906-0000-0000-C000-000000000046

So, given the combination of identical metadata and time stamps, it's possible that the values were intentionally modified or manipulated; the fact that some of the metadata values appear to have been altered would support this.

Remember, you can also use Didier Stevens' tool to extract the compressed VBA macros.  For example, to view the complete VBA script from the second document, I just typed the following command: d:\cases\maldoc\apt28_2 -s 8 -v

Shoop*, there it is...the decompressed macro script.  Very nice.  Maybe Cory and I should reprise DFwOST and do a second edition, including stuff like this...going beyond just tools for forensics, and looking at tools that allow you to parse documents, etc.

Something I've mentioned before is the set of values I saw listed in the output of 'strings' run across the documents:


Again, I've seen these before in Office documents that contain macros, most times with different values.  The definition of these values can be found at the MS-OVBA Project page, under the Stream example, but I haven't yet been able to determine how these fields are populated.

LNK Files
I've posted several times over the past couple of months regarding parsing metadata from LNK files that are part of an adversary's toolkit; those either sent to a target as an email attachment, or embedded in another document format, etc.  US-CERT posted an alert recently that I found to be very interesting along the same lines, particularly regarding the fact that the adversary modified Windows shortcut files as a means of credential theft.  The big take-away from this information (for me) is that the adversary incorporated something that rudimentary about how the operating system functions, and used it as a means of collecting credentials.
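As a quick triage sketch, here's one (admittedly naive) way to carve UNC paths out of raw LNK file bytes in Python, without a full LNK parser; the sample bytes below are synthetic, with a made-up attacker path:

```python
def find_unc_strings(data: bytes):
    """Naively carve UTF-16LE strings beginning with '\\\\' from raw bytes."""
    needle = "\\\\".encode("utf-16-le")   # b'\\\x00\\\x00'
    results, i = [], data.find(needle)
    while i != -1:
        j = i
        while j + 1 < len(data) and data[j:j + 2] != b"\x00\x00":
            j += 2   # walk to the UTF-16 NUL terminator
        results.append(data[i:j].decode("utf-16-le", errors="replace"))
        i = data.find(needle, j)
    return results

# Synthetic bytes: LNK header magic followed by an embedded UNC icon path.
sample = (b"\x4c\x00\x00\x00"
          + "\\\\attacker\\share\\icon.ico".encode("utf-16-le")
          + b"\x00\x00")
print(find_unc_strings(sample))
```

A shortcut whose icon or target path points at a remote SMB share is exactly the sort of thing the alert describes, since Windows will happily attempt to authenticate to that share.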

Of course, the "elephant in the room" is, why are organizations still allowing out-bound SMB traffic?

USB Devices
There's been a good bit of activity online recently regarding USB devices and Windows 10 systems.  Eric Zimmerman blogged about some changes to the AmCache.hve file that include a significant amount of information about devices.  Matt Graeber tweeted that "Microsoft-Windows-Partition/Diagnostic EID 1006 gives you a raw dump of the partition table, MBR, and VBR upon drive insertion", and @Requiem_fr tweeted that USB device info can be found in the Microsoft-Windows-Kernel-PnP%4Configuration.evtx Windows Event Log.

A little over 12 years ago, Cory Altheide and I published a paper on tracking USB devices on Windows XP systems, and it's been pretty amazing to see not only how this sort of thing has grown over time, but also to see the number of artifacts that continue to be generated as new versions of Windows have been released.

*Please excuse my taking artistic liberties to work in a Deadpool reference...

Saturday, October 14, 2017


In preparing to do some testing in a Windows 7 VM, I decided to beef up PowerShell to ensure that artifacts are, in fact, created.  I wanted to make sure anything hinky that was done in PowerShell was recorded in some way.

The first step was to upgrade PowerShell to version 5.  I also found a couple of sites that recommended Registry settings to ensure that Module Logging and Script Block Logging were enabled, as well.
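For reference, the policy keys involved live under HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell.  Here's a small Python sketch that generates a .reg file to enable both settings; treat it as a starting point and verify the values in your own environment before importing anything:

```python
# The Group Policy registry locations for Module Logging and Script Block Logging.
settings = {
    r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging":
        {"EnableModuleLogging": 1},
    r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging\ModuleNames":
        {"*": "*"},    # log all modules
    r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging":
        {"EnableScriptBlockLogging": 1},
}

lines = ["Windows Registry Editor Version 5.00", ""]
for key, values in settings.items():
    lines.append(f"[{key}]")
    for name, val in values.items():
        if isinstance(val, int):
            lines.append(f'"{name}"=dword:{val:08x}')
        else:
            lines.append(f'"{name}"="{val}"')
    lines.append("")

reg_text = "\n".join(lines)
print(reg_text)
```

Once enabled, the resulting events land in the Microsoft-Windows-PowerShell/Operational Event Log, which is where the testing comes in.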

The idea behind this is that there have been a number of cases I've worked that have involved some sort of obfuscated PowerShell...Meterpreter, stuff loaded from the Registry, stuff that's come in via LNK files attached to emails (or embedded in email attachments), etc.  Heck, not just cases I've worked...look at social media on any given day and you're likely to see references to this sort of thing.  So, in an effort to help clients, one of the things I want to do is go beyond just recommending "update your PowerShell" or "block PowerShell altogether", and be able to show what the effect of updating PowerShell will likely be.

There's been a good bit of info floating around on Twitter this past week regarding the use of DDE in Office documents to launch malicious activity.  I first saw this mentioned via this NViso blog post, then I saw this NViso update (includes Yara rules), and anyone looking into this will usually find this SensePost blog article pretty quickly.  And don't think for a second that this is all there is...there's a great deal of discussion going on, and all you have to do is search for "dde" on Twitter to see most of it.

David Longenecker also posted an article on the DDE topic, as well.  Besides the technical component of his post, there's another aspect of David's write-up that may go unnoticed...look at the "Updated 11 October" section.  David could have quietly updated the information in the post, but instead went ahead and highlighted the fact that he'd made a mistake and then corrected it.

USB Devices
Matt Graeber recently tweeted about data he observed in the Microsoft-Windows-Partition/Diagnostic Windows Event Log, specifically events with ID 1006; he said it "gives you a raw dump of the partition table, MBR, and VBR upon drive insertion."  Looking at records from that log, in the Details view of Event Viewer, there are data items such as Capacity, Manufacturer, Model, SerialNumber, etc.  And yes, there's also raw data from the partition table, MBR, and VBR, as well.

So, if you need to know something about devices connected to a Windows 10 system, try parsing the data from this *.evtx file.  What you'll end up with is not only which devices were connected, but also when, and how often.
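Parsing the EVTX records themselves requires a tool or library, but once you've extracted the raw MBR bytes from the event data, pulling out the partition table is straightforward.  A minimal sketch, run here against a synthetic MBR rather than real event data:

```python
import struct

def parse_mbr_partitions(mbr: bytes):
    """Parse the four 16-byte partition-table entries at offset 446 of an MBR."""
    assert len(mbr) >= 512 and mbr[510:512] == b"\x55\xaa", "bad MBR signature"
    parts = []
    for n in range(4):
        entry = mbr[446 + n * 16 : 462 + n * 16]
        ptype = entry[4]
        if ptype == 0:          # empty slot
            continue
        parts.append({
            "bootable":  entry[0] == 0x80,
            "type":      ptype,
            "lba_start": struct.unpack("<I", entry[8:12])[0],
            "sectors":   struct.unpack("<I", entry[12:16])[0],
        })
    return parts

# Synthetic example: one bootable NTFS (type 0x07) partition starting at LBA 2048.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446] = 0x80                                    # bootable flag
mbr[446 + 4] = 0x07                                # partition type
mbr[446 + 8:446 + 12] = struct.pack("<I", 2048)    # starting LBA
mbr[446 + 12:446 + 16] = struct.pack("<I", 1024000)
print(parse_mbr_partitions(bytes(mbr)))
```

The partition type and size can tell you something about a device even when the device itself is long gone.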

Eric Zimmerman recently tweeted about the RecentApps key in the NTUSER.DAT hive; once I took a look at the key contents, I was pretty sure I was looking at something not too different from the "old" UserAssist data...pretty cool stuff.  I also found via Twitter that Jason Hale had blogged about the key, as well.

So, I wrote a plugin and a TLN variant, and uploaded them to the repository.  I only had one hive for testing, so YMMV.  I do like the TLN plugin...pushing that data into a timeline can be illuminating, I'm sure, for any case involving a Windows 10 system where someone interacted with the Explorer shell.  In fact, creating a timeline using just the UserAssist and RecentApps information is pretty illuminating...using information from my own NTUSER.DAT hive file (extracted via FTK Imager), I see things like:

{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (3)
[Program Execution] UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (0)
{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE RecentItem: F:\ch5\notes.txt


{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe RecentItem: C:\Users\harlan\Desktop\speak.vbs
{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe (7)
[Program Execution] UserAssist - {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\WScript.exe (0)

The two listings of three entries above are from a timeline; the entries in each listing all occurred within the same second, and provide a good bit of insight into the activities of the user.  For example, this augments the information provided by the RecentDocs key by providing the date and time at which files were accessed, rather than just that of the most recently accessed file.  Add to this timeline entries from the DestList stream from JumpLists, as well as entries from AmCache.hve, etc., and you have a wealth of data regarding program execution and file access artifacts for a user, particularly where detailed process tracking (or some similar mechanism, such as Sysmon) is not enabled.
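The merging itself is trivial...once each artifact is parsed into (time stamp, source, description) tuples, a simple sort gives you the timeline.  A toy sketch with made-up time stamps and values:

```python
from datetime import datetime

# Hypothetical parsed artifacts; each tuple is (time stamp, source, description).
events = [
    (datetime(2017, 10, 12, 14, 3, 22), "UserAssist",
     "{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\\NOTEPAD.EXE (3)"),
    (datetime(2017, 10, 12, 14, 3, 22), "RecentApps",
     "NOTEPAD.EXE RecentItem: F:\\ch5\\notes.txt"),
    (datetime(2017, 10, 12, 9, 15, 8), "RecentApps",
     "WScript.exe RecentItem: C:\\Users\\harlan\\Desktop\\speak.vbs"),
]

# Sorting merges the sources into one timeline; entries sharing the same
# second cluster together, which is exactly the pattern shown above.
for ts, source, desc in sorted(events):
    print(f"{ts.isoformat()}  {source:<12}{desc}")
```

Add more sources (JumpLists, AmCache.hve, Event Logs) as additional tuples and the same sort does the work.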

Eric also posted recently about changes to the contents of the AmCache.hve file...that file has, so far, been a great source of artifacts, so I'll have to go dig into Eric's post and update my own parser.  From reading Eric's findings, it appears that there's been some information added regarding devices and device drivers, which can be very valuable.  So, a good bit of the original data is still there (Eric points out that some of the MFT data is no longer in the hive file), and some new information has been added.

Friday, October 06, 2017


Eric over at Carbon Black recently posted regarding the Kangaroo ransomware.  Here are some cool things that Eric points out about the ransomware:

1. It's GUI-based, and the folks behind it are using it to infect un- or under-protected RDP servers.

2. The ransomware time-stomps itself.  While on the surface this may seem to make the ransomware difficult to find during DFIR, that's not really the case at all, and to be honest, I'm not at all sure why this step was taken.

3. The ransomware clears the System and Security Event Logs, and removes VSCs.  As with the time stomping, I'm sure that clearing the Event Logs is intended to make things difficult, but to be honest, most folks who've done this kind of work know (a) where to look for other artifacts, and (b) how to recover cleared Windows Event Logs.

Eric's technical analysis doesn't mention a couple of things that are specific to ransomware.  For example, while Eric does state that the ransomware is deployed manually, there's no discussion of the time frame in which the ransomware is deployed after the RDP server is accessed, nor of any attempts at network mapping or privilege escalation.  I'm sure this is the result of the analysis being based on samples of the ransomware, rather than on responding to engagements involving this ransomware.  Earlier this spring, I saw two ransomware engagements that were markedly different.  While both involved compromised RDP servers, in one, the bad guy got in, mucked about for a week (7 days total, albeit not continuously) installing Opera, Firefox, and a GIF image viewer, and then launched ransomware without ever attempting to escalate privileges.  As such, only the files in the compromised profile were affected.  In the second instance, on the other hand, the adversary accessed the RDP server and within 10 minutes escalated their privileges and launched the ransomware.  In this case, the entire RDP server was affected, as were other systems within the infrastructure.

Types of Ransomware
Speaking of ransomware, I ran across this article from CommVault recently, which discusses "5 major types" of ransomware.  Okay, that sparked my interest...that there are "5 major types". 

Once I started reading the article, I became even more interested, particularly in the fourth type, identified as "Samsam".  Okay, this is the name of a variant or family, not so much what I'd refer to as a "type" of ransomware...but okay.  Then I read this statement:

Once inside the network, the ransomware looks for other systems to attack.

I've worked with Samsam, or "Samas", ransomware for a while.  For example, I authored this blog post (note: prior employment) based on analysis of about half a dozen different ransomware engagements where Samas was deployed.  In all of those cases, a JBoss server was exploited (using JexBoss), and an adversary mapped the network (in several instances, using Hyena) before choosing the systems to which Samas was then deployed.  More recently (i.e., this spring), the engagements I was aware of involved RDP servers being compromised (credentials guessed), and a much shorter timeframe between initial access and the ransomware being deployed.

My point is, from what I've seen, the Samas ransomware doesn't do all the things that some folks say it does.  For example, I haven't yet seen where the ransomware looks for other systems.  Further, going back to Microsoft's own description of the ransomware modus operandi, I saw no evidence that the Samas ransomware "scans the network"...I did, however, find very clear evidence that the adversary did so.  So, a lot of what is attributed to the ransomware itself is, in reality, and based on looking at data, the actions of a person, at a keyboard.

If you want to see some really excellent information about the Samas ransomware, check out Kevin Strickland's blog post on the topic.  Kevin did some really great work, and I really can't say enough great things about the work he did, and what he shared.

Windows Registry
Over on the Follow the White Rabbit blog, @_N4rr34n6_ has an interesting article discussing the Windows Registry.  The article addresses setting up and using RegRipper and its various components, as well as other tools such as Corey Harrell's auto_rip and Phill Moore's RegRipper GUI, both of which clearly provide a different workflow placed over the basic code. 

I've had the honor and privilege of being asked to be involved in a few podcasts recently, and I thought I'd share the links to all of them in one place, for those who are interested in listening:

Doug Brush's CyberSecurity Interviews - I've followed Doug's CyberSecurity Interviews from the beginning, and greatly appreciated his invitation and opportunity to engage

Down the Security Rabbithole with Rafal and James; thanks to both of these fine gentlemen for offering me the opportunity to be part of the work they're doing

Nuix Unscripted - Corey did a really great job moderating Chris and me, which brought things full circle; not only did Chris and I work together in the past, but Chris was one of the very first folks interviewed by Doug Brush...

Chris Woods over at Nuix (transparency: this is my employer) posted an excellent article regarding three best practices for increasing the efficiency of examinations.  Interestingly enough, these are all things that I've endorsed over the years...defining clear analysis goals, collaboration, and using what has been learned from previous investigations.  I want to say something about "great minds", but the simple fact is that these are all "best practices" that simply make sense.  It's as simple as that.

I ran across something really fascinating today..."wait," you ask, "more fascinating than making your computer recite lines from the Deadpool movie??"  Almost!  Here is a fascinating article that illustrates not only the steps for how to reveal Wifi passwords on a Win7+ computer, but provides a batch file for doing so!  How cool is that?
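The batch file in the article wraps netsh; if you wanted to post-process that output instead, the parsing amounts to pulling the "Key Content" line.  A small sketch against sample (fabricated) output; note that actual netsh output varies with OS language and version:

```python
import re

# Fabricated sample in the shape of:
#   netsh wlan show profile name="HomeWifi" key=clear
sample_output = """
Security settings
-----------------
    Authentication         : WPA2-Personal
    Cipher                 : CCMP
    Key Content            : hunter2
"""

def extract_key(text):
    """Pull the cleartext key from 'netsh wlan show profile ... key=clear' output."""
    m = re.search(r"Key Content\s*:\s*(\S+)", text)
    return m.group(1) if m else None

print(extract_key(sample_output))
```

Loop that over the profile names from "netsh wlan show profiles" and you've essentially reproduced what the batch file does.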

LNK Metadata
A while back, I'd taken a look at a Windows shortcut/LNK file from a campaign someone had blogged about, and then submitted a Yara rule to detect submissions to VirusTotal, based on the MAC address, volume serial number, and SID embedded in the LNK file.  This was based on an LNK file that had been sent to victims as an attachment.

The Yara rule I submitted a while back looks like this:

rule ShellPhish
{
    strings:
        $birth_node = { 08 D4 0C 47 F8 73 C2 }
        $vol_id     = { 7E E4 BC 9C }
        $sid        = "2287413414-4262531481-1086768478" wide ascii

    condition:
        all of them
}

So, pretty straightforward.  The thing is, over the past few days, I've seen a pretty significant up-tick in responses from the retro hunt, indicating a corresponding up-tick in submissions to VT.  Up to this point, I'd been seeing maybe one or two detections (again, based on submissions) a week; I've received a few dozen or so in the past two days alone.  This up-tick in responses is an interesting change, particularly because I'm not seeing a corresponding mention of campaigns utilizing LNK files as attachments (to emails, or embedded in documents, etc.).

A couple of things I haven't yet done are to note the first submission dates for the items, as well as the countries from which they were submitted, and to download the LNK files themselves to parse out the command lines and note the differences.

So, why am I even mentioning this?  Well, this goes back to Jesse Kornblum's premise of using every part of the buffalo, even though it's not directly associated with memory analysis here.  The metadata in file formats such as documents and LNK files can be used to develop insight based on relationships, which can lead to attribution, further developing the threat intelligence you already have available.
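For example, pivoting on that LNK metadata can be as simple as grouping samples by the (MAC address, volume serial number, SID) tuple; matching tuples suggest the files were built on the same host.  A toy sketch, with hypothetical hashes and values:

```python
from collections import defaultdict

# Hypothetical metadata extracted from a set of LNK submissions:
# sample hash -> (MAC address, volume serial number, SID).
samples = {
    "hash_a": ("08:d4:0c:47:f8:73", "7ee4bc9c", "2287413414-4262531481-1086768478"),
    "hash_b": ("08:d4:0c:47:f8:73", "7ee4bc9c", "2287413414-4262531481-1086768478"),
    "hash_c": ("00:0c:29:11:22:33", "deadbeef", "1111111111-2222222222-3333333333"),
}

clusters = defaultdict(list)
for sample_hash, metadata in samples.items():
    clusters[metadata].append(sample_hash)

for metadata, hashes in clusters.items():
    if len(hashes) > 1:
        print(f"{len(hashes)} samples likely built on the same host: {sorted(hashes)}")
```

Fold in first-submission dates and submitting countries, and the relationships start to tell a story.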