
The top 10 “what not to do’s” in an incident

When facing a cyber-security incident or breach, IT personnel can take well-intended but hasty actions that actually frustrate incident response efforts.

While it is not possible to have a single response plan that addresses every potential attack, there are guidelines that can help you deal with a crisis situation while preserving important evidence for the subsequent investigation. Equally, there are common pitfalls that arise from hasty actions. Consider the following IT responses – and their impacts – when managing an incident or breach:

1. Running AV

A typical response our investigators encounter is IT staff running as many AV products as possible under the illusion that this will help. AV products often change file systems, deleting malware and destroying the metadata crucial for an investigation. Moreover, they often retain few or no logs about what they found, where it was, and when it was put there. If you know there has been a major security incident, be aware that running one or more AV products is one of the most destructive actions you can take.

2. Patching Systems / Fixing Bugs

Patching systems is recommended; refer to recent news items if you need proof. However, when you find something unpatched on your internet-facing website or internal server in the middle of an incident, remediating immediately is usually not the best approach. Bear in mind that when you make these changes, you also destroy the environment that was the scene of the crime, making it more difficult to answer questions about the data at risk, how much of your estate was exposed and what it was vulnerable to. Lacking that information will slow the incident response process, so it is important to preserve the scene before patching anything.

3. Quick, pull the plug!

You can’t be hacked without power. Very true. However, most of the information you will need to answer questions about an attack resides in live system memory. Without this information, questions such as the following will be difficult for the incident responders to answer:

  • What was the malware connecting to internally and externally?
  • What credentials were at risk?
  • How long has this attack been occurring?
  • Is it active right now?

If it’s not ransomware, attackers can do little with malware they can’t control, so think twice before pulling the plug in an effort to contain the attack.
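
Pulling the plug also discards this volatile evidence. A proper memory image taken with forensic tooling is the ideal; but if a system genuinely must come offline and no such tooling is to hand, even a basic record of running processes and network connections is better than nothing. The following is a minimal illustrative sketch only, not a substitute for forensic memory acquisition; it assumes Python 3 and the third-party psutil package are available on the affected host, and the output filename is purely illustrative:

import json
import time

import psutil  # third-party package: pip install psutil

snapshot = {
    "collected_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "processes": [],
    "connections": [],
}

# Running processes: who started what, when, and from where on disk.
for proc in psutil.process_iter(["pid", "ppid", "name", "exe", "username", "create_time"]):
    snapshot["processes"].append(proc.info)

# Network connections: may require administrative privileges to see everything.
for conn in psutil.net_connections(kind="inet"):
    snapshot["connections"].append({
        "pid": conn.pid,
        "status": conn.status,
        "local": f"{conn.laddr.ip}:{conn.laddr.port}" if conn.laddr else None,
        "remote": f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else None,
    })

# Write the snapshot somewhere that will survive the shutdown, ideally
# removable media or a remote share rather than the suspect disk.
with open("volatile_snapshot.json", "w") as fh:
    json.dump(snapshot, fh, indent=2, default=str)

Record the time the snapshot was taken and where it was stored, so the responders can put it in context later.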

4. Moving / copying malware

When IT teams do find malicious files on their systems, a common approach is to move them to a folder on the desktop called “malware” and use the infected system to do some backyard malware analysis. This usually results in the loss of important information about where the malware originated, including date and time stamps, all whilst the still-active attacker watches the IT team attempt to figure out what happened.
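
A safer first step is to record what the investigators will need before anything touches the file. The sketch below is illustrative only and assumes Python 3; the suspect path and output filename are made up, and in practice this would ideally be done through your forensic tooling rather than ad hoc on the infected host:

import json
import os
import time

SUSPECT = r"C:\Users\victim\AppData\Roaming\update.exe"  # illustrative path

st = os.stat(SUSPECT)  # stat the file before anything opens or moves it
record = {
    "path": SUSPECT,
    "size_bytes": st.st_size,
    "modified_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(st.st_mtime)),
    "accessed_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(st.st_atime)),
    "created_or_changed_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(st.st_ctime)),
    "recorded_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
}

# Store the record off the suspect machine, e.g. on removable media or a share.
with open("suspect_file_record.json", "w") as fh:
    json.dump(record, fh, indent=2)

A cryptographic hash of the file is also worth recording at the same time; a hashing example follows under the next point.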

5. Uploading malware to Virus Total

VirusTotal is fantastic. However, when you upload malware to this and other sandbox services, the files become available for anyone else on that platform to download, tipping off the world as to what malware was active on your systems. Moreover, attackers often monitor for when their malware has been uploaded to VirusTotal as an early warning sign that you are on to them. Instead, use the search function to look up file hashes, or switch to offline or private malware analysis sandboxes.
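
You can usually learn whether a sample is already known without disclosing anything, by hashing the file locally and searching for that hash. A minimal illustrative sketch, assuming Python 3, the third-party requests package and a VirusTotal API key in the environment; the v3 lookup endpoint and response fields are shown as we understand them at the time of writing and should be checked against the current VirusTotal documentation:

import hashlib
import os

import requests  # third-party package: pip install requests

SUSPECT = "suspect.bin"             # illustrative filename
API_KEY = os.environ["VT_API_KEY"]  # assumes a key is set in the environment

# Hash the file locally; the file contents never leave your network.
sha256 = hashlib.sha256()
with open(SUSPECT, "rb") as fh:
    for chunk in iter(lambda: fh.read(1 << 20), b""):
        sha256.update(chunk)
digest = sha256.hexdigest()

# Query VirusTotal by hash only.
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{digest}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
if resp.status_code == 404:
    print(f"{digest}: not previously seen on VirusTotal")
else:
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{digest}: {stats}")

If the hash is unknown, resist the urge to upload; a private or offline sandbox keeps the sample out of the attacker’s view.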

6. Immediately blocking C2 channels

Blocking Command and Control (C2) channels is an important aspect of incident containment; however, it must be executed at the right time. Often the right time is not immediately once you’ve found a channel, as attackers can and often do have multiple C2 channels. Blocking them carries the risk of alerting the attackers, causing them to change their behaviour and start creating more channels in increasingly obscure ways, or to become destructive. This will make it more challenging for incident response to get ahead of the attack. As such, make sure you balance having sufficient intelligence on the attack against the data leakage risk before deciding to block C2 channels.

7. The “Just rebuild it” approach

When you rebuild a system, you lose almost all data about the attack, and will likely re-introduce the security weaknesses that got it compromised in the first place. If you are in the middle of a major incident, any system being rebuilt is worth preserving first in case an investigation is needed. In normal day-to-day incident management, the same rule applies. Finding out that the source of an attack was a workstation you had in fact rebuilt is an unenviable scenario, as questions about how or what occurred will remain unanswered. Worse, you can end up playing ‘whack-a-mole’ with the attacker as they repeatedly re-establish a beachhead using the same attack vectors.

8. There’s no space, delete the logs

Operational IT and security teams need to work hand in glove when an incident is underway. Often, operations teams will help carry out instructions to move large amounts of data around for analysis. When space is limited, these teams will usually delete what they believe to be valueless data, such as logs. In fact, logs are exceptionally valuable to the incident investigation. Make sure your security and operations teams have first responder training and know what you are trying to achieve, so that valuable data is not lost.
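
If space genuinely has to be reclaimed, compressing or relocating the logs is almost always an option that still preserves them. A minimal illustrative sketch, assuming Python 3; the source and destination paths are made up, and the destination should sit on a different volume or system:

import tarfile
import time
from pathlib import Path

LOG_DIR = Path("/var/log/app")            # illustrative source directory
ARCHIVE_DIR = Path("/mnt/evidence/logs")  # illustrative destination on another volume

ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
archive_path = ARCHIVE_DIR / time.strftime("logs-%Y%m%d-%H%M%S.tar.gz")

# Compress the logs into a timestamped archive rather than deleting them.
with tarfile.open(str(archive_path), "w:gz") as tar:
    for log_file in sorted(LOG_DIR.glob("*.log*")):
        tar.add(str(log_file), arcname=log_file.name)

print(f"Archived to {archive_path}; verify the archive before removing any originals.")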

9. It must be an insider! 

Attribution is a common request we receive from clients. Security teams believing an insider must be to blame is a common hindrance to the incident response process. The belief that you couldn’t have been hacked from outside is a natural one, and it normally leads to looking internally. However, the administrators and managers of the compromised systems, the very people with the knowledge of how those systems work and what might have happened, then become the people you suspect. Being on the wrong side of these individuals can obstruct an efficient investigation. Whilst insider threats are real, the vast majority of attacks are perpetrated from outside the organisation, often from countries where local law enforcement cannot reach the attackers. Unauthorised use of privileged accounts is expected attacker behaviour. Be observant, keep an open mind, but keep your internal teams on-side until you have the full picture.

10. Assuming the best

By far, this is the most common mistake security and response teams make: assuming the attacker did not use their access as leverage to move laterally towards their goal. Unless you have caught an incident early, at the point of entry, assume the worst; it is safer in the long run. Look at what an attacker could have done, and where they could have moved. Test those theories to prove the worst did not happen. Our experience shows that assuming the best-case scenario usually leads to worst-case results.

 

 
