How to Use Logs for Forensics After a Data Breach


Despite the best precautions, it is impossible to protect your network against every attack. When the inevitable happens, your log data can be critical for identifying the cause of the breach and collecting evidence for use in the legal system. That is, if your logs were properly configured before the breach happened.

Log files are generated by virtually every piece of data-processing equipment whenever an activity takes place. Each log entry is an electronic fingerprint with an added element: a timestamp, which lets us reconstruct what happened and in what order. Analyzing logs is the primary method of digital forensics, and properly managed logs can also serve as evidence in a court of law for prosecution purposes.

Data loss a mystery for many businesses

When you enable logging, you can typically specify two things: 1) the severity level, which determines how severe an event must be to generate a log message, and 2) the level of detail captured in each message, the so-called verbose level.

There are eight standard severity levels, from high-severity level 0 (called emergency, in which only emergency and extremely critical events are logged) to low-severity level 7 (called debug, in which almost any minute event is logged).
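To make the numbering concrete, here is a minimal Python sketch, not tied to any particular product, of the eight standard severity levels and how a severity threshold filters events:

```python
# The eight standard syslog severity levels (per RFC 5424); 0 is most severe.
SEVERITY = {
    0: "emergency",
    1: "alert",
    2: "critical",
    3: "error",
    4: "warning",
    5: "notice",
    6: "informational",
    7: "debug",
}

def should_log(event_severity: int, threshold: int) -> bool:
    """An event is recorded only if it is at least as severe as the threshold."""
    return event_severity <= threshold

# At a threshold of 4 (warning), an error (3) is logged; a notice (5) is not.
print(should_log(3, 4))  # True
print(should_log(5, 4))  # False
```

Setting the threshold to 0 records only emergencies; setting it to 7 records everything.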

Verbose levels are less standardized and vary by the vendor, make, and model of the equipment.

The tradeoffs are obvious:

High severity, low verbose level:

* Few messages; each message is short.

* Little storage requirement, but you won't know much about what happened.

Low severity, low verbose level:

* Many messages; each message is short.

* Medium storage requirement and you will know when something happens, but you won't know much about what happened.

High severity, high verbose level:

* Few messages; each message is long.

* Medium storage requirement; you will be able to tell a lot about critical events, but there are many events into which you'll have no visibility at all.

Low severity, high verbose level:

* Many messages; each message is long.

* High storage requirements but you'll know a lot about any event happening.
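The storage side of these tradeoffs can be estimated with back-of-envelope arithmetic. The message rates and sizes below are purely illustrative, not vendor figures:

```python
# Rough storage estimate for log volume (illustrative numbers only).
def daily_log_volume_mb(messages_per_second: float, avg_message_bytes: int) -> float:
    """Estimated log volume per day, in megabytes (86,400 seconds per day)."""
    return messages_per_second * avg_message_bytes * 86_400 / 1_000_000

# A chatty, verbose configuration vs. a quiet, terse one:
print(daily_log_volume_mb(100, 500))  # 4320.0 MB/day
print(daily_log_volume_mb(1, 100))    # 8.64 MB/day
```

Even hypothetical numbers like these show why severity and verbosity choices drive storage costs by orders of magnitude.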

The right approach is to apply a risk-management method to your logging: identify the set of systems from which it is important to keep logs.

There is no need for a one-size-fits-all approach to severity and verbosity; instead, crank up the severity and verbose levels for important systems and dial them down for unimportant ones.

We recommend creating four groups of systems, each corresponding to one of the severity/verbose categories described above, and applying a different level of logging to each.

Remember the rule of thumb: when in doubt, log it, because you never know when you'll need it. It is tempting to use debug-level logging, but it typically generates so much information that it slows systems down, so use it with caution. A typical setting is severity level 6 -- informational -- which captures plenty of information without a significant performance penalty.

Collection and Storage of Logs

Once you know the level of severity and verbose level of the logs you want, you are ready to answer the second question: "Where do I keep the logs?"

This question is important because some systems allow you to either store the logs locally or send them in real-time to a remote server. In fact, one of the first things the bad guys will do when attacking a system is try to tamper with the local log file to hide their tracks or plant fake evidence to send you running the wrong way.

Again, there are pros and cons for each of these methods:

Local storage:

* No need to transport the logs, but managing rights and permissions on the directories containing them adds operational complexity.

* Window of opportunity for bad guys to manipulate the logs if a system gets hacked -- the logs cannot necessarily be trusted.

* Operational complexity when doing forensics, because you must scour system after system, each holding its own local logs.

UDP syslog to send logs to central repository:

* Unreliable transport mechanism with no guarantee of delivery but no need to manage each local system's log storage directories.

* Little possibility for bad guys to manipulate logs as they are being sent in real-time.

* Centralization of all logs providing a unique window into separate sources of logs.
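A minimal sketch of UDP syslog forwarding using Python's standard library; the 127.0.0.1 address is a placeholder standing in for your central collector:

```python
import logging
import logging.handlers

# Send log records over UDP syslog (port 514) to a central collector.
# 127.0.0.1 is a placeholder; point this at your real log server.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
log = logging.getLogger("webserver")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Fire-and-forget: the datagram leaves immediately, but UDP offers
# no guarantee of delivery (the tradeoff noted above).
log.info("failed login for user admin from 203.0.113.5")
```

Because the record is shipped off the box at the moment it is generated, an intruder who later compromises the source has no local copy to tamper with.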

Dedicated agent to send logs to central repository:

* Operational cost to deploy agents to every single source server from which we need to collect logs.

* Reliable transport mechanism.

* Little possibility for bad guys to manipulate logs as they are being sent in real-time.

* Centralization of all logs providing a unique window into separate sources of logs.

The right approach: Use a risk-management method to assess which makes the most sense for your environment. In high-security environments, you may want to deploy agents in each system you want to collect from, although the operational cost could be high if your scope contains many systems.

The middle ground, and probably the easiest way to get up and running, is to send logs via the syslog protocol to a remote server. Better still, send them to a dedicated log management solution. This will ensure all your logs are centralized, which makes it easier to get the most value out of them.
