These days, many organizations have established reasonably efficient Security Operations capabilities, collecting security events from their systems, profiling user and machine behavior, and running investigative searches to support their incident response.
A major cybersecurity incident normally requires an organization to involve digital forensics experts, who usually need to spend a significant amount of time analyzing a specific system and presenting their findings in the form of a timeline (when it happened), forensic artifacts (what happened), and, if they are lucky enough to get data for it, attacker tools, techniques and procedures (how it happened).
Traditionally, digital forensics assumes the paradigm of a ‘crime scene’ that needs to be investigated: a ‘post mortem’ analysis of prior events in an attempt to reconstruct how an endpoint or a user account was compromised. It is usually performed under the presumption that the threat has already been contained and that the affected endpoint is now isolated from the network. A forensic examination of a single endpoint can take hours or even days, and it does not scale well as the number of affected or suspected endpoints increases. Furthermore, these days the affected endpoints can be anywhere, from physical PCs to virtual containers in a multi-cloud environment.
What if you have 200 compromised endpoints (out of your 10,000 endpoint fleet), and you don’t know what else the threat actor could have deployed to your environment to ensure persistence?
Most organizations simply do not have the capabilities (tools and resources) to conduct security incident investigations at a deeper, forensic level. They have to accept the paradigm of an opportunistic attack and assume that the problem is completely solved if they can simply wipe the affected endpoint remotely.
At the same time, many threat actors these days are actively seeking long-term persistence. Can you ever be confident that the threat has not propagated to other parts of your network, now in a completely different form? If you have ever been involved in incident response, you know how hard it is to rule out all the possibilities, and how much guesswork is usually involved in the process as everyone rushes to declare that everything is back to normal. How many times have you wondered whether other endpoints and users might also be affected, and whether there is a practical way to find out?
The Need for Deep Visibility
While SIEM platforms usually provide good coverage in terms of the breadth of visibility, many security events usually remain uncovered. How many organizations do you know that monitor all east-west network traffic? Or all browser history for all users? Full PowerShell payload analysis? Every file accessed by every user on the network? All software running on remote workers’ laptops?
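To make one of these concrete: a basic building block of PowerShell payload analysis is decoding commands launched with the `-EncodedCommand` flag, which PowerShell encodes as Base64 over UTF-16LE text. A minimal sketch in Python (the sample payload here is illustrative, not a real attack):

```python
import base64

def decode_powershell_command(encoded: str) -> str:
    """Decode a PowerShell -EncodedCommand payload.

    PowerShell encodes these payloads as Base64 over UTF-16LE text,
    so decoding reverses both layers."""
    raw = base64.b64decode(encoded)
    return raw.decode("utf-16-le")

# Illustrative example: encode and then recover a harmless command.
encoded = base64.b64encode("Write-Host 'hello'".encode("utf-16-le")).decode("ascii")
print(decode_powershell_command(encoded))  # Write-Host 'hello'
```

In practice a collector would pull the encoded strings from process-creation events or PowerShell operational logs before decoding them.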
It is a significant challenge in itself to build an efficient Security Operations and Detection & Response capability. Whatever your efficiency criteria are, your SecOps is likely running in a naturally established balance between the volume of available information (security event data) and its utility (the organization's ability to practically extract value in the form of high-fidelity detections).
Getting more data means more visibility, which is generally a good thing: your chances of detecting something malicious increase as you can see more events across your attack surface. However, SOC operators have to detect anomalies (catch the fish) in a very large volume of data (a big ocean), so more data also means more noise (false positives). In practice, information utility is a function of available people, processes and technologies. Unable to process all available data, day-to-day SecOps simply seeks to cover as much of the attack surface as possible (more log sources), hoping to detect something there, while a forensic-style investigation capability relies on deeper visibility.
In the event of a sophisticated attack, a well-designed SecOps capability should provide some initial insights, indicating deviations from what is considered normal. These initial insights are extremely important, as they can provide a direction to your forensic searches and data collections. Simply speaking, if you trust your SIEM, you have a chance. But what would be your next step?
If you remember how the SolarWinds hack was detected, it all started when FireEye discovered that its red team tools had been stolen. Threat actors had been living on FireEye's systems for some time before the intrusion was detected, and losing those red team tools was a big smack in the face. Yet it took FireEye just a few days to work out what had happened: they had all the required data, tools and people, so once they knew what to look for (someone had accessed specific versions of files from a specific repository), it was not difficult to join the dots. The quantum leap (figuring out that the culprit was a presumably trusted software package from SolarWinds) was made very quickly, as soon as it was clear that adversaries had access to the environment.
That’s how the human brain works. It is very difficult for us to discover something truly unknown until we know what to look for. But once we know what to look for, we need access to as much data as possible, and we need tools and processes in place to quickly run a deep investigation at a very large scale.
In a world where cybersecurity attacks become more and more sophisticated, enterprises and MSSPs need new capabilities to conduct in-depth investigations with much greater scale, speed and efficiency.
How can we define Enterprise Forensics capability?
Enterprise Forensics provides deep visibility across the entire attack surface, supporting the investigation of initial anomalies and inconsistencies discovered by SecOps. While digital forensics is traditionally seen as a reactive, time-consuming activity (with a lot of time spent on evidence collection), with Enterprise Forensics organizations can investigate anything at scale, find out what really happened, and whether it happened anywhere else. It is done in real time, immediately following threat detection and incident response, and tightly integrated with your SecOps SIEM / SOAR / hunting toolset.
Immediate forensic data acquisition becomes an important part of incident response — and now it can be done remotely and on a large scale, immediately profiling not just specific endpoints but entire environments (users, machines and software).
Threat Hunting is the art of working with the (limited) real-time data you have and catching something (fishing in the big ocean) using the common threat hunter arsenal of alerting, traps, deception and hunting queries. Enterprise Forensics is like casting a very fine net across the entire ocean.
An effective Enterprise Forensics capability should be:
- Profound: Providing deep real-time visibility and extraction of forensic artifacts, RAM, registry keys and file system objects, including deleted data and unallocated disk space, as well as the ability to intelligently analyze user, machine and software behavior.
- Scalable: Working across the entire attack surface of an organization, and operating at the enterprise-wide scale.
- Remote: Reaching all assets and networks across your entire enterprise.
- Integrated: Able to be invoked on demand, in full integration with existing SIEM / EDR / XDR capabilities.
Enterprise Forensics assists analyst investigations by quickly aggregating, analyzing and visualizing large volumes of data in a way that provides valuable insights and quickly steers an investigation in the right direction. Enterprise Forensics should support any hypothesis that an investigator would like to put to a test, providing real-time data collection and quick outlier detection to direct next steps.
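One common way to turn fleet-wide collections into quick outlier detection is "stacking": count how many distinct hosts each artifact appears on, and surface the rare ones for closer review. A minimal sketch below; the host names and autorun entries are hypothetical illustrations, not real data:

```python
def rare_artifacts(observations, max_hosts=2):
    """Given (host, artifact) pairs collected fleet-wide, return artifacts
    seen on at most `max_hosts` distinct hosts -- candidates for review."""
    hosts_per_artifact = {}
    for host, artifact in observations:
        hosts_per_artifact.setdefault(artifact, set()).add(host)
    return sorted(a for a, hosts in hosts_per_artifact.items()
                  if len(hosts) <= max_hosts)

# Hypothetical fleet-wide collection of autorun entries.
obs = [
    ("pc-001", "OneDrive.exe"), ("pc-002", "OneDrive.exe"), ("pc-003", "OneDrive.exe"),
    ("pc-001", "Teams.exe"),    ("pc-002", "Teams.exe"),
    ("pc-042", "updater_svc.exe"),   # present on a single host: stands out
]
print(rare_artifacts(obs, max_hosts=1))  # ['updater_svc.exe']
```

Rarity alone is not proof of anything malicious, but it is a cheap, fast way to steer an investigator toward the handful of artifacts worth a deeper look.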
Enterprise Forensics can be applied to profile not only individual users or endpoints, but also to deeply explore the behavior of any software in an organization’s digital supply chain. According to the 2021 Supply Chain Resilience Report, 27.8% of organizations reported 20 or more supply chain disruptions in 2020, while it is predicted that 60% of security incidents will result from issues with third parties.
Every security analyst is familiar with a common problem in security operations: how can one determine whether some rarely used piece of software is malicious or not? Only a few organizations can confidently say that they know every piece of software running in their environment, usually by explicitly whitelisting their software and accepting the associated management overhead and user-experience degradation.
If that is not what you do, then you need to accept that there is always some small, little-known software living in your user environment. If you run such software in a controlled environment and watch it closely, you will be surprised how quickly you see it doing something you cannot completely explain. You might see it connecting to TLS endpoints in the cloud (hidden behind a public CDN) and getting encrypted data back, or creating scheduled tasks (which often persist even after the software is deleted) which also connect somewhere on a daily basis.
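A simple way to triage such leftovers is to flag scheduled tasks that run from user-writable paths, or whose target binary no longer exists on disk. The sketch below is a hedged illustration: the record fields and the suspicious-directory list are assumptions, not the output of any particular collector (on Windows the raw data might come from `schtasks /query /fo csv` or an EDR task inventory):

```python
# Illustrative list of user-writable locations worth a closer look.
SUSPICIOUS_DIRS = ("c:\\users\\", "c:\\windows\\temp\\")

def flag_tasks(tasks):
    """Flag tasks that run from a user-writable location, or whose target
    binary no longer exists on disk (a common leftover of deleted software)."""
    flagged = []
    for t in tasks:
        action = t["action"].lower()
        if (any(action.startswith(d) for d in SUSPICIOUS_DIRS)
                or not t.get("binary_exists", True)):
            flagged.append(t["name"])
    return flagged

# Hypothetical parsed scheduled-task records; field names are illustrative.
tasks = [
    {"name": "GoogleUpdate",
     "action": "C:\\Program Files\\Google\\Update\\GoogleUpdate.exe",
     "binary_exists": True},
    {"name": "SyncHelper",
     "action": "C:\\Users\\bob\\AppData\\Roaming\\sync\\helper.exe",
     "binary_exists": False},
]
print(flag_tasks(tasks))  # ['SyncHelper']
```

A heuristic like this produces leads, not verdicts; each flagged task still needs an analyst to look at what the binary did and where it connected.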
Even if this software is not doing anything malicious at any particular time, it cannot be excluded from an investigation just based on public intelligence data (such as a VirusTotal lookup). With threat actors increasingly targeting smaller vendors and suppliers, digital supply chain compromise is something that always should be considered during a post-incident analysis.
It is often very difficult to profile each piece of software continuously, as malicious indicators are usually very subtle (remember the story of the stolen FireEye red team tools), but once you know where to start, you need tools to explore it. While it is usually not practical for a blue team to closely monitor every piece of software on a continuous basis, Enterprise Forensics provides the capability to look into any specific software package on demand and qualify its behavior from the perspective of the detected threat.
Integrating Enterprise Forensics Into Your SecOps Toolchain
So how does Enterprise Forensics fit into your existing Security Operations capabilities? Quite often people get frustrated with their existing tools, and once they get a new one it can be tempting to make it do everything. If I can extract any piece of information from any of my assets in real time or as a batched query, why can’t I just use it to replace all my detection / response / DFIR tools?
In reality, most likely you do not want to do that. You still want to have your SIEM to generate alerts from your data, proportional to your SecOps capabilities. You still want to have some kind of correlation / orchestration layer (SOAR) to do something with these alerts. And then, you need Enterprise Forensics capability that will get your investigations to the next level, collecting deep visibility data beyond your hunting playbooks.
ThreatDefence provides innovative MDR, SOC-as-a-Service, and proactive cyber defence solutions to MSPs and Enterprises. Our Adaptive XDR Platform provides rich Threat Hunting, DFIR and Enterprise Forensics capabilities, allowing businesses of any size to investigate threats at scale and in real time, with deep visibility and forensic data collection from live systems. Our platform aggregates all information a business can reach, whether it sits within their network, on the Dark Web, or deep within their digital supply chain.
At ThreatDefence, we have defined a flexible data collection and analysis model that can be used to activate forensic data collection on demand as well as historically, obtain visibility across your entire environment, and run deep investigations in real time.