Friday, March 20, 2009
More SIEM Vendor Leap Frog
Dominique Levin (EVP of Strategy/Marketing at Log Logic) writes in this Network World article posted last night (03/19/2009) about the development and convergence of SIEM and Log Management. I'm glad that Log Logic finally understands the model and is trying to address a broader market opportunity by incorporating SIEM into their offering. If you didn't already know, last month Log Logic partnered with ExaProtect to provide a more native (to Log Logic) SIEM solution. As a side note, it has been my experience that you can make other SIEMs work in conjunction with Log Logic (at least in a unidirectional manner) by forwarding events to the SIEM from the Log Management platform.

I hope that Log Logic (and other vendors) continue to read my SIEM Vendor Leap Frog post and take some of the challenges in current technologies to heart. Bi-directional search between Log Management and SIEM, shared user authorization and authentication techniques, more robust shared management options: all of these really need to evolve from these types of offerings. I hope they and the other vendors look at this as an opportunity to truly merge the products into a solution, versus the current "bolt-on" approach some in the market have taken. It is not enough to just have the technology available; the vendors must understand how customers will use it in the field and make these products simpler to deploy, manage and ultimately actually use. ArcSight, RSA and other key players are working on this very diligently and have made great strides toward making this vision a reality. It's still nowhere near perfect, but I think it will get much more emphasis over the next 12-18 months as more people demand better integrated solutions during their acquisition or renewal cycles.
Another side note: At the recent IANS DC forum and again at SOURCE Boston Peter Kuper noted that security vendors are going to have to make more of an effort to partner with their customers to really thrive in this market. Peter also made the point that customers have to demand more value from their vendors in order to show value to their own management. I think everyone should take that message to heart!
The information presented in the Network World article further validates some of the positions I presented in my SIEM Vendor Leap Frog post earlier this week. For that matter, so does a recent "tweet" from NitroSecurity (Twitter: @nitrosecurity), a "tweet" from RSA's SIEM Solutions Evangelist Paul Stamp (Twitter: @tknsecurityguy) and a recent post by Paul Stamp on his personal blog.
The idea of combining Log Management and SIEM isn't novel (in fact it is several years old) but only recently has it become the "standard" for gaining "Enterprise Visibility" and then moving towards making security operations work more fluidly through the use of a SIEM. The combining of Log Management and SIEM is not trivial to accomplish but can be done quite well and adds huge value, if architected correctly.
The article explains the evolution of SIEM through the years, beginning with Perimeter Security "Use-Cases", moving through certain "Internal Monitoring" use-cases and then describing how SIEM gained critical mass through "Compliance" use-cases. I will not debate the relevance of SIEM in each of these situations other than to say: both Log Management and SIEM product sets are nothing more than tools. They can be a powerful resource in the right hands and have a great many potential applications, but the team wielding that power has to know how to apply it and when (and when not to). While it is true that some SIEM platforms are flexible enough to move beyond simple network security based use-cases, the complexity involved in making those transitions requires an expert touch. Let's get these systems working correctly in security first; then we can think about expansion into other areas (business intelligence, etc). There is no magic fairy dust here. It is hard work at each and every step, but there is a payoff. You can automate many labor-intensive tasks, including identification and escalation of alerts, which should free up some analytical cycles to find new and more complex activities that can be turned into "events of interest" for future correlation.

BTW, I didn't mean to dismiss the value of Log Management and SIEM outside the context of security. It is possible (it requires great flexibility in the vendor solution, but I know many organizations that have made interesting solutions work in very unique ways). I'm simply saying there is a lot more work we can do to get the actual security focused portion of these solutions working better before we try to show value (and overextend our reach/resources) in other areas.
Let's keep working together to encourage the right partnership and evolution from our vendors! They are doing the best they can, but it is up to the community at large to focus them in the right direction.
Saturday, March 7, 2009
Combined Log Management and SIEM Architecture Benefits
A well-maintained Log Management and SIEM deployment can significantly reduce the time to Incident Identification and really enhance your overall information security capability. The diagram attempts to illustrate that all information from the Event Sources are processed through the appropriate Log Collection Mechanism and then forwarded to the Log Management System.
The Log Management system eats, stores and can regurgitate everything put into it. The Log Management solution can also further refine the data set and forward only applicable events for analysis to the correlation engine (SIEM) through the use of intelligent "tagging" of events.
Overall data reduction is only part of the end goal; more importantly, we want to ensure the right data is forwarded and evaluated so that we can gain from the overall efficiencies offered by the SIEM. In short, we're ensuring the system has the correct information available to it so that it can respond to the questions you want to ask of it, while reducing the garbage as much as possible.
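To make the tagging idea a bit more concrete, here is a rough Python sketch of how a Log Management tier might decide which events to forward to the SIEM. This is not any vendor's actual API; the field names, tag names and thresholds are all invented for illustration.

```python
# Illustrative sketch of "intelligent tagging": the Log Management tier
# stores everything, but forwards only tagged, security-relevant events
# to the SIEM correlation engine. All field/tag names are hypothetical.

FORWARD_TAGS = {"auth_failure", "policy_violation", "ids_alert"}

def tag_event(event):
    """Assign tags based on simple attributes of the raw event."""
    tags = set()
    if event.get("type") == "login" and event.get("status") == "failed":
        tags.add("auth_failure")
    if event.get("source") == "ids":
        tags.add("ids_alert")
    if event.get("severity", 0) >= 8:
        tags.add("policy_violation")
    return tags

def forward_to_siem(events):
    """Keep everything in Log Management; forward only tagged events."""
    return [e for e in events if tag_event(e) & FORWARD_TAGS]

events = [
    {"type": "login", "status": "failed", "source": "vpn", "severity": 3},
    {"type": "login", "status": "ok", "source": "vpn", "severity": 1},
    {"type": "scan", "source": "ids", "severity": 5},
]
print(len(forward_to_siem(events)))  # 2 of the 3 events are forwarded
```

The point of the sketch is the shape of the decision, not the rules themselves: the full stream stays searchable in Log Management, while the SIEM only has to correlate the subset that earned a tag.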

This post is a mirror of my post at http://blog.decurity.com
SIEM Best Practices: Combined Log Management and SIEM Architecture Benefits
Tuesday, February 24, 2009
SIEM Best Practices: Evaluation Criteria
Decurity often has the opportunity to help our customers find the right Log Management and/or SIEM solution. We are honored that our customers trust us with that very important question, so we wanted to take a moment to explain our requirements gathering/documentation process for vendor selection and hope that our explanation helps a few more folks out there! We also get asked by vendors how they can improve their products, but that's an entirely different blog post.
In March of 2008 I authored a couple of posts related to SIEM pre-requisites:
SIEM Best Practices: Very Basic SIEM Implementation Success Criteria and
SIEM Best Practices: Before you buy.
In those posts I tried to create a baseline of information that customers looking to purchase and implement a SIEM solution should have before engaging the vendors. The point of those posts really boiled down to this idea: you must have a strong set of requirements defined up front for the vendors to indicate how they meet each requirement. Allowing the vendors to "work their magic" and define your problems is roughly equivalent to handing them a blank check. Along those lines, I wanted to highlight a strategy we employ when helping companies define their SIEM requirements by presenting a sample of the categories of questions we ask of the customer and the vendors during the evaluation process.
A couple of quick notes before we begin with the listings:
1.) All of this assumes you’ve answered the initial “Key Problems we are trying to solve” question and the answer is something more tangible than meeting PCI, SOX, or audit requirements.
1a.) If events per second (EPS) is your key measurement you are looking at the wrong product set - seek out Log Management Tools first.
2.) It is also important to note that when we perform the evaluation, each of these stated technical requirement categories breaks down into a dozen or more actual testing criteria that are prioritized according to your requirements. The "Sample Questions" presented are only a very quick overview of the types of questions that fall into that category.
3.) This post is simply highlighting the fact that significant thought should be given to this decision. Don’t worry if you need help – we’re here.
Sample Categories of Requirements to consider:
Common Requirement Categories
Category (Sample Questions)
Access Control (Application, User, flexibility, inherited controls, etc)
Authentication (LDAP, SSO, AD, Internal, other)
Architecture (Reliable and Scalable)
Event Sources (Supported Technologies and versions, Connection Methods for each, Data Parsing Errors, Normalization Data Loss, Categorization Correctness, Structured/Unstructured Data Handling)
Log Management (Is the Integration Bi-Directional, easy to implement, etc)
Event Forwarding (Security, Methods, Low-Bandwidth options, etc)
Overall Security (System and the data)
External Integrations: (Tool Integration, Ticketing System Integrations, etc)
Storage Requirements (Compression, Costs, Management)
Storage Flexibility (NAS, SAN, Internal, Offline/Online, Tiered Storage)
Data Processing (Internally, how does the system handle new event sources with uncommon field requirements, i.e. "unstructured data"?)
Installation (Does the solution match our standards?)
Patching and Upgrade (Level of effort required for Minor and Major Versions)
Overall User Experience (Can I see what is important quickly and easily? Can I drill down quickly and intuitively?)
Standard Reporting (Easy, Flexible, Exportable)
Advanced Usage Requirement Categories
Category (Sample Questions)
Basic Alerting Criteria (Pattern Matching or Aggregation/Counting)
Basic Correlation (IF, THEN, ELSE, AND, NOT, OR type statements)
Advanced Correlation (Meta Analysis of enriched and/or raw data across technologies, time and result sets in real time.)
Statistical Analysis (Flexible event statistics that can be used in alerting or to enrich data sets for correlation)
Custom Reporting (Can I create my favorite report or extend it)
Data Mining (Can I easily look for patterns across the entire DB?)
Data Visualization (Can data viz be integrated and does it matter for me?)
Vulnerability Integration (Is the correlation useful for our environment and is the reporting useful?)
Network Modeling (How hard is it to model our environment and what value is lost/gained?)
Asset Modeling (Can I easily assign systems to relevant categories and assign priorities, can I update them easily, etc)
User/Activity Modeling (Can we realistically “profile” users or activities and alert on deviations?)
External Threat Feeds (Does the vendor or a partner provide daily updates for Hotlists?)
Built in Mgmt Tools (Does the vendor provide a way of measuring the health of the system?)
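To illustrate what the "Basic Correlation" category above actually exercises, here is a toy Python sketch of an IF/AND-style rule: IF a source produces several failed logins AND then a success, THEN raise a correlated alert. The event fields, threshold and alert name are invented for the example; real SIEM rule languages differ, but this is the class of logic an evaluation should test.

```python
from collections import defaultdict

# Toy correlation rule (illustrative only): IF a source generates
# FAIL_THRESHOLD failed logins AND then a successful login,
# THEN emit a correlated alert for that source.

FAIL_THRESHOLD = 3

def correlate(events):
    failures = defaultdict(int)  # running failure count per source
    alerts = []
    for e in events:  # events are assumed to be time-ordered
        if e["outcome"] == "fail":
            failures[e["src"]] += 1
        elif e["outcome"] == "success":
            if failures[e["src"]] >= FAIL_THRESHOLD:
                alerts.append(("possible_brute_force", e["src"]))
            failures[e["src"]] = 0  # reset after any success
    return alerts

stream = (
    [{"src": "10.0.0.5", "outcome": "fail"}] * 3
    + [{"src": "10.0.0.5", "outcome": "success"}]
)
print(correlate(stream))  # [('possible_brute_force', '10.0.0.5')]
```

When evaluating vendors, the useful question is how easily their rule editor expresses exactly this kind of stateful "N of X followed by Y" condition, and how it behaves under out-of-order or dropped events.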
Other Important Criterion
Category (Sample Questions)
Company Performance (This is becoming more and more of a key decision factor.)
Support (What can I escalate, response times, expertise, RMA)
Thought Leadership (What is the vision for the technology?)
Training (Do I need 4 weeks of training to use the product? If so how many types of training opportunities are available?)
Services Support (Do I need 12 weeks of Services? How can I guarantee I don’t get the new guy? Is the team compensated on billability or Customer Success?)
Content Updates (How often can I receive content updates? Do I need constant "workshops" to move forward? Are there external providers that can help?)
Licensing Model (Price can be greatly affected by various pricing models, make sure you understand the total cost of all phases of your deployment before you begin).
This post is a mirror of my personal post on http://blog.decurity.com
http://blog.decurity.com/index.php/dec_template/more/siem_best_practices_evaluation_criteria/
Friday, February 20, 2009
Preview of Decurity’s New Enterprise SIEM Subscription Service
Simplified overview of the O&M problems most SIEM customers face:
1. Today, many SIEM customers have 2 or more Full Time Engineers (FTEs) supporting, managing or otherwise dedicated to their SIEM and still find themselves using only a small percentage of the SIEM’s real potential.
2. Many customers know that there is more they can “do” with the SIEM but simply can’t get there from where they currently stand. Frustration continually builds up.
3. Most customers simply don’t know where to go next after the initial implementation or consulting engagement.
4. Hiring the best SIEM Experts is really, really expensive especially when you factor in all of the downtime caused by change-control or other mission critical tasks that pop-up and waste valuable time.
Simplified solution overview:
1. Decurity will help ensure you purchase the right tool for your needs and ensure the tools are configured optimally for the long-haul.
2. Decurity will provide pre-packaged and custom-built content delivered to you on a recurring basis to help expand your usage of the SIEM and extract the most possible value from the tool.
3. Decurity is there for the long-term, working to understand your changing needs and using our expertise to help guide your efforts accordingly.
4. Decurity leverages the most experienced SIEM team in the industry to deliver these services in a very cost-effective manner.
A little more detail about what is included in our Enterprise SIEM Subscription Service:
Installation/Expansion:
Decurity can help during all phases of your SIEM deployment. Decurity will work with you to help you define the requirements, guide you through vendor selection, architect the solution, implement or expand on your existing infrastructure. We partner with you to ensure you receive the best possible advice through the lifecycle of your SIEM deployment.
Quarterly SIEM Healthchecks:
As part of this service offering, Decurity will work with your team on a quarterly basis to ensure your SIEM is performing at its optimal capacity. Typically, much of this work can be accomplished remotely, further reducing your team’s time and cost commitments. We’ll quickly identify any issues, offer remediation plans and help you implement any necessary changes.
SIEM Content Updates:
Our experts will develop SIEM Content to help your analysts more accurately focus on the “Events of Interest” for your organization. Our solutions are categorized by Event Source and/or by Problem-Set to help you better understand which content will add value to your environment. Solutions will be updated on a recurring basis (daily, weekly, etc) as new Event Sources, Problem-Sets and Solutions are identified and/or refined.
Here are some examples of SIEM Content we’ll update/refine for you:
• Active Lists: For example, Hot IPs and Domains - we maintain a list of Hot IPs and Domains that is updated daily (as necessary).
• Active Channels: Events of Interest, Interesting Analytical Views
• Data Monitors / Dashboards: Statistical Analysis, Performance Measurements, Security Status Dashboards
• Filters: (reusable queries)
• Reports/Query/Trends: Reports that focus on measuring success or providing “Actionable Intelligence”
• Correlation Rules: Basic and advanced correlation relevant to the problem-set and customizable to meet your specific organization’s needs.
• Workflow and Notifications
• Tools: Integration of tools/macros/scripts
• Pattern Discovery (Profiles): (ArcSight Only) By providing new and updated profiles based on Event Sources or problem sets we’ll help you gain the most from this powerful tool!
Added Value:
As part of this offering customers also have the opportunity to submit new problem-sets for us to solve - simply work with us through our support system to understand the problems you are trying to solve and we’ll help you develop customized solutions. Instead of investing in costly consulting engagements you can leverage this service to create solutions.
Log Management and SIEM integration Support
We’ll help you most effectively use your Log Management and SIEM tools to complement and enhance the overall value of both solutions!
We’ll ensure the data is intelligently processed, providing you with the information you need without killing your SIEM or overwhelming your team. From the Event Source, through the “collector”, into your Log Management solution and finally as it reaches your SIEM, we’ll work with you to ensure the right information is collected, stored, forwarded and analyzed, maximizing functionality and overall value while reducing storage/processing costs.
Summary:
No matter where you stand with your SIEM deployment Decurity’s Subscription service will benefit you greatly. If you’re just getting started we’ll save you the 2 years of frustration your peers enjoyed. If you’re more mature in your SIEM efforts we can help ensure you’re really getting all the value you possibly can from your system. Our goal is to make this as simple as possible so that you can work on the output of the SIEM and take action to protect your enterprise. We’ll make the SIEM work FOR you!
Sales Information: We want to work with you to understand your needs and will be more than happy to schedule some time to talk more about how Decurity can help you with your SIEM and Log Management needs. Please send us an email at sales at decurity dot com with any questions you might have and we’ll get back to you (usually the same day).
About Decurity:
Decurity supports the Fortune 500 globally and many US Government customers on a true enterprise scale. We are focused solely on Security Operations, including the use of SIEM and Log Management solutions to enhance the Incident Response process. Our experts have been responsible for hundreds of Log Management and SIEM implementations across the world. We will do what it takes to make you successful!
The preceding has been a repost of my blog entry at: http://blog.decurity.com/index.php/dec_template/more/preview_of_decuritys_new_subscription_service/
Update 1 (23 Feb 2009) : Updated http://www.decurity.com reference link: http://www.decurity.com/SIEM_SUBSCRIPTION_OVERVIEW.html
The webpage offers additional explanation about the initial rollout of this service offering, which is centered on the ArcSight ESM and ArcSight Logger products. Future releases will offer support for products such as Splunk, Symantec SIM, RSA enVision, etc.
Wednesday, November 26, 2008
SIEM: The Quickening Begins
Though unlike Highlander, I hope that in the end there can be more than one. SIEM is NOT dead, but if High Tower’s recent announcement is any indication, the herd will certainly thin in the very near future.
How many vendors have both viable solutions and can realistically survive in SIEM and/or Log Management for the long-term? ArcSight, RSA EnVision, NetForensics, Q1 Labs, CA, NetIQ, Symantec, eIQNetworks, Splunk, Cisco, IBM, Nitro Security, TriGeo, Tenable, Log Logic, LogRhythm, Intellitactics, Sensage, Exaprotect, Alertlogic, Checkpoint and Novell. Not to mention MSSP-specific solutions or vendors I may have missed.
A few years ago there was a period of acquisitions/consolidation (Cyber Wolf, E-Security, Micromuse/GuardedNet), but if this article from socaltech.com is correct, then this is the first outright collapse of a SIEM product company that I can think of off the top of my head. High Tower had reinvented itself over the past 18 months from the ground up. They had some very dedicated and talented folks on staff. When they rebuilt CINXI they had a simple but relatively effective tool for the SMB marketplace. Most importantly to me, they always seemed passionate about making life better for their customers. That moves me to another train of thought....
SIEM: Time to re-focus?
In my mind, that “passion” for customer success is what the SIEM market sorely needs again. The main focus of many vendors has shifted to targeting smaller companies and/or providing specifically branded solutions that strive to solve all the world’s problems related to PCI, etc.
It seems to me that the magic SIEM once had has been lost. The “magic” was the partnership that existed between the vendor and the customer, where the entire vendor organization pushed relentlessly for customer success! The vendor would sit with the customer and pull use-cases (teeth) from the customer. Then together they would develop customized solutions to those defined problem-sets. The initial process might take weeks or even months to accomplish because it is a learning effort for the customer, but the level of trust, understanding, collaboration and overall value to the entire security team is tangible. Thinking through how to define the necessary data elements, ensure time sync is in place, obtain and centralize the data, refine analysis processes, enact ACLs, create reports and facilitate actions is a difficult but crucial element of ensuring you can effectively monitor and identify incidents on your network.
What needs to happen?
The vendors need to make the products easier right from the start and work constantly to add value to the overall solution. We need to help the customer understand the value of the event sources they have in place today and which event sources add value in conjunction with current/planned event sources. What information can remain in the log management solution, and what is best fed to the SIEM? What are common problem-sets/solutions, and how can they be enhanced/updated more frequently? We need to collaborate better and level the playing field for the “good guys” for a change.
SIEM has significant value:
Implementing a SIEM correctly forces you to look at and specifically address all of the issues mentioned earlier. SIEM also provides benefits including enterprise-wide improvements in visibility, log standards, time sync, IT and business unit collaboration, reporting and overall security posture. It certainly doesn’t hurt that the efficiency and overall effectiveness of your security team are greatly enhanced by having a good process, comprehensive enterprise visibility, the right tools and trained professionals!
Summary:
- There are a ton of Log Management and SIEM vendors, and the smaller ones will continue to be bought or fail over the next 12-18 months.
- The Log Management and/or SIEM solutions you put in place need to be driven by real world, well defined problem-sets, and you do need to worry about the long term viability of the company; many won’t exist this time next year.
- Both Log Management and SIEM are tools that fit into an overall process within your organization and the entire process needs love to be successful!
Monday, November 17, 2008
Netwitness Investigator
NetWitness announced today that it is providing a free version of its Investigator product to the world!
I’ve previously blogged about what I think are the critical success criteria for Security Operations and Incident Response, and that within the Collection activities of the SOC, very little has more importance to me than Full Packet Capture (some call it Deep Packet Inspection, among many other newer marketing terms). In the end, it is the ability to review and reconstruct activity on your network as it occurred. In a couple of very large organizations I support, I’ve been lucky enough to have NetWitness as the tool we use to support those needs; now the rest of the world can start to look at this fantastic technology for their network analysis purposes. Today NetWitness announced it has released a version of their Investigator software for free. This is the full version of Investigator, with the ability to capture and decode/display both live and previously recorded network traffic. New functionality exists throughout the product, and as I have time and/or if readers ask, I’ll blog on those features with screenshots and/or video in the coming days.
Some of the highlights of new or increased functionality in this Release:
Free license supports 25 simultaneous 1GB captures.
Network and application layer filtering (e.g. MAC, IP, User, Keywords, Etc.)
IPv6 support
Full content search, with Regex support
Bookmarking & history tracking
Hash PCAP on Export
Thoughts on Installation:
Download, installation and user/computer registration were very simple. I’ve installed it on Vista 32-bit and 64-bit platforms without much of an issue at all (make sure a recent version of WinPcap is installed). They have some great pointers in the documentation if you get stuck. I’ve played with many of the parsers to decode application traffic and hope to have some time to talk through my likes and areas for feature enhancements at some point in the future. I will use the community.netwitness.com forum for those comments, though (I suggest all users do the same).
Initial User Experience:
For folks familiar with previous versions of NetWitness, the first thing you’ll find is the new graphical representation of data across the extracted timeline. You can toggle it on/off depending on your preferences. I’ve found that when looking over larger time frames, it does help to narrow scope to certain peak-time activities that become obvious using this method of identification. Looking at data is easy once you spend a few minutes with the product. The documentation is solid and accurate, so I do recommend taking a look at it if something isn’t intuitive to you. Data can be captured live or imported as pcap; both worked flawlessly for me in my testing. I imported several of the pcap files on openpacket.org just to see variances in presentation between NW and other tools. Once you have your data set available, “pivoting” through the data creates filters that streamline your searching and allow you to drill all the way into the session. You start by viewing report data (summarized data), and you can control the “meta” data that is displayed (the values) and re-order it to meet your needs (under Options, look for the “Reports” tab for more information).
Advanced user Tip 1: I’ve been told and found through recent personal experience that enabling “querystring” and other non-default selections (Options > Reports) provides some great starting points for investigations.
Advanced user Tip 2: There are ways to add new data elements for actual indexing and therefore later reporting/pivoting, but that is for a more detailed post later. Think assigning attributes to IPs or subnets and you’ll see where I’m headed with that thought.
User Experience Continued: As you find interesting activity you drill into the data by clicking on the link(s) provided.
Hint: The number next to the text tag indicates the number of sessions, and if you click on the number it will bring you right to those sessions. If the number is exceptionally high, you might want to continue to drill into the data to refine your search a bit more before rendering those sessions.
Once the sessions are presented, you will have the option to view them in a few different ways. By default, NetWitness will try to present each session in the most appropriate manner, but if you prefer Hex or Packet views to session replay, you can select those views very easily.
NetWitness also provides a few very easy to follow YouTube videos for those of us too lazy to read the documentation and wanting to get started right away.
Netwitness Rules
These rules (Application and Network) sit on the Decoder in a production deployment. In the free version, the decoder software is bundled into the “local capture” functionality of Investigator. Note: as you might expect, remote captures are only supported with the enterprise license. The NetWitness help files provide a link to a sample rules file you can download and test. Modifying the rules is very easy: export the rules file, edit it with any text editor and then import it back into Investigator. Alternatively, you can use the NetWitness provided GUI “Intellisense”, which tells you when you make a mistake in your syntax by changing color. You can reorder the rules, and there is some precedence to the rules (first match, I think), but the engine is incredibly efficient in terms of both number of rules and number of variables in the rules.
Example Rules:
Let’s take a real world example to show how easy the rules language is. Recent publicly available intelligence from the SANS ISC describes some interesting activity we might want to monitor in order to track down command and control channels. I’m simply presenting NW as a fairly flexible alternative or enhancing event source.
1.) SANS Example 1 - # DNS responses which contained a domain that belonged to one of a long list of dynamic DNS providers;
name:DECURITY_DRAFT:Suspicious_Domains:1;rule:alias.host ends '.domain.xxx','.domain.xxy';order:1;alert;type:application
Note: Just obtain your favorite list of domains, update this rule, and you're off to the races...
2.) SANS Example 2 - # DNS requests for a hostname outside of the local namespace which were responded to with a resource record pointing to an IP address within either 127.0.0.0/8, 0.0.0.0/32, RFC1918 IP space, or anywhere inside the public or private IP space of the organization. Because I’m limited on time I’ll simplify this one a bit and focus on 127.0.0.2 for example just to get the point across - it can be easily extended.
name:DECURITY_DRAFT:Suspicious_DNS_127.0.0.2:1;rule:service=53 && ip.dst=127.0.0.2;order:2;keep;alert;type:application
3.) In the sample rules file, NetWitness also provides protocol-to-port matching functionality, so finding non-standard ports for protocols is easy to alert on. Here is one example:
name:SAMPLE_Vulnerability:NonstandardPort:DNS;rule:service!=53 && tcp.dstport=53;order:51;alert;type:application
I’m sure the examples I provided can be refined greatly (NW reps/users, if you want to jump in here and provide comments for improvement, I’m all for it!) and at some point maybe we’ll post updated rules files to enhance your NW experience!
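To make the first-match precedence mentioned earlier concrete, here is a toy Python sketch of that evaluation model. This is emphatically not the NetWitness rule language or engine; the session fields and rule predicates are invented purely to show how ordering determines which rule decides a session's disposition.

```python
# Minimal first-match rule engine sketch (invented fields, not actual
# NetWitness syntax): rules are checked in order, and the first rule
# whose predicate matches a session decides it; later rules never fire.

rules = [
    ("suspicious_dns",
     lambda s: s["service"] == 53 and s["dst"].startswith("127.")),
    ("nonstandard_dns_port",
     lambda s: s["service"] != 53 and s["dstport"] == 53),
    ("default_keep",
     lambda s: True),  # catch-all; ordering puts it last on purpose
]

def evaluate(session):
    for name, predicate in rules:
        if predicate(session):
            return name  # first match wins
    return None

print(evaluate({"service": 53, "dst": "127.0.0.2", "dstport": 53}))
# 'suspicious_dns', even though a session can satisfy later rules too
```

The practical consequence for writing real decoder rules is the same as in this sketch: put your most specific conditions first, because a broad rule placed early will shadow everything below it.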
Summary:
Now that this tool is free for your use, I fully believe that everyone should try it out. There is significant value in using NetWitness not only for previously captured .pcap data but also on live production networks as an operational tool for analysis. When you combine the capabilities of NetWitness with your other data and toolsets (Log Management, SIEM, IDS, server logs, etc), you have a much better set of data to comprehend what is happening on your network, and you can significantly reduce the time to identify the incidents that are occurring.
Friday, November 14, 2008
DHS Conversation Follow-up: Summary of Einstein and TIC
I’ve received a series of follow-up emails, phone calls and texts asking about “Einstein” and the Trusted Internet Connection (TIC) initiative.
These inquiries were initiated by people reviewing a quick blog/Twitter discussion between myself and Martin McKeay. BTW, if you haven’t read his follow-up posts about the conversation with Secretary Chertoff, you should take the time (he provides an audio recording of the conversation and links to the summaries provided by the other attendees as well).
Ok back to the subject… (Note: Even though I have more background on both programs than most people do - All of the information presented in this blog is based on publicly available information.)
DHS Under Secretary Robert Jamison (DHS National Protection and Programs Directorate) referenced the Einstein v2 program in his blog while defending the significant improvements that DHS has made over the past few years. DHS published a “Privacy Impact Assessment” for Einstein 2 back in May of 2008.
I think if you read through the Privacy Impact Assessment you’ll gain a sufficient understanding of the program. Einstein basically captures and stores flow data (session data), and with Einstein v2 the technology adds very basic IDS capabilities and some limited packet capture.
The technology itself isn’t the key factor in the program (after all IDS has lived, died, resurrected, ascended many times over the years). It is the use of the technology across the government in conjunction with the Trusted Internet Connection (TIC) program and the significant growth of US-CERT to provide a better analytical capability that adds the necessary “teeth” to the overall program.
TIC - My previous blog article has the necessary references to learn more about TIC. The short summary is that TIC is part of the White House/OMB program to consolidate Internet gateways across the government - reducing access points, providing more centralized access, lowering costs and, most importantly to me, increasing visibility. Einstein is one of the many technical requirements that TIC Access Providers (TICAPs) must implement in order to be certified. The other technical requirements can be found in the TIC Capability Matrix Excel document.
DHS said they were going to look closely at DoD and model after their capabilities (awesome). DISA’s ’09 O&M budget estimates provide some good insight into their methodology. Obviously DoD has a different motivation (offensive and defensive) for information security, but the foundation is there.
I’m not going to tie the Federal Desktop Core Configuration (FDCC) and other related programs into this, other than to say: the government is trying to take a more holistic approach to its defensive posture. Given the state of much of the government’s security posture (have you read recent GAO audits?) there is still a long way to go. Recently, it seems that both the funding and the congressional support are there to move these programs forward in a manner that finally goes beyond unfunded and ill-defined mandates. Any way you look at it, cyber security in the government space over the next few years should be a fun ride.
Now with that information out there, I’m off to try and comprehend President-Elect Obama’s cybersecurity stance (or at least as much of it as is published) so I can try to understand what the next steps are for cyber security across the government. Who knows - maybe he’ll read his twitter account and ask a few of us to ask questions and provide our opinions!
Monday, November 10, 2008
2008 IANS Pacific Information Security Forum
Rocky DeStefano will participate as faculty at the IANS 2008 Pacific Information Security Forum in San Francisco Dec 2-3, 2008.
If you’re not sure if IANS can add value to your organization, take a look at the faculty. Trust me, even a few minutes with these folks is an amazing experience.
The IANS Pacific Forum is in San Francisco, December 2-3 2008 at the SF Marriott. If you’re in the area during that time - please register.
This is one of the very few public presentations I do and I love the format, thoroughly enjoy the discussions with the IANS faculty, appreciate the interactive participation with great attendees and always have a great time at these events. Can’t wait for this one!
I will be staying in the SF Bay area through the 5th to meet with customers, SIEM and Log Management vendors, and anyone else who would like to catch up! Email me if you want to get together!
Rocky
DHS Blogger Roundtable
I found out through twitter that Martin McKeay has a blog post, “What would you ask the Department of Homeland Security Secretary?”, on his Network Security Blog. Boy, I’d love to participate in that session....
I tried to respond using the online form but it timed out and I think I lost my responses. I’ll do my best to reconstruct them from memory…
If I had a few moments with Michael Chertoff, the Secretary of the Department of Homeland Security I’d ask about the following subjects:
1. Einstein Program Goals
2. Trusted Internet Connection (TIC) Goals
3. Government/Commercial Cooperation
4. My Shoes
1. Einstein is essentially netflow data (session data) that is made available at the DHS level by participating departments/agencies. Participation is now mandated by TIC, but that is a different story. Session data has its uses, but on its own that value is severely limited. In addition to the publicized IDS-like additions coming in v2.0, what does the road-map hold for this project, and why does the government not yet have a full packet capture capability (also mandated by TIC)?
2. TIC: Is the end goal of the TIC to classify all government networks (SIPR-like) or is it simply a more resilient network that is intended to function if the civilian internet is disabled? ((Yes this is an OMB Mandate but DHS is coordinating every aspect of the program so the line of questions related to TIC would be appropriate for DHS)). I love the requirement for SSL capture, decryption and monitoring - have the DHS lawyers provided any insight on how agencies can prepare and implement the appropriate policy at their levels? This requirement seems to go beyond the normal acceptable use and consent to monitoring statements covered in most policies today.
Some TIC References:
June 2008 Status Report
Mem 08-16 (TICAP CAPABILITY MATRIX) and the
TICAP (TIC Access Provider) requirements matrix (excel file).
3. Government Cooperation with Civilian Entities: While not completely useless, the current collaboration is at best ineffective, cumbersome and slow. Those who wish us harm are certainly not restrained in their sharing of information or tools. The collaboration on the side of the “good guys” needs to be enhanced. What are DHS’s plans to enable better collaboration between the public and private sectors? How can we help DHS?
4. My Shoes: Personally this just bugs the crap out of me. I fly every week. I am a “Cleared, Registered traveler” so I do enjoy the benefits of shorter security lines at DCA, but still this effort to take off my shoes every time is getting very, very old. I probably catch 1-2 elderly folks a quarter as they fall trying to remove their shoes. There has to be a more effective way of monitoring for illicit substances in my shoes than an x-ray of millions of stinky shoes. I’m sure we can find a more reasonable method to enable airline travel to continue to have the perception of being “safe”.
He’d probably laugh at me during the conversation but at least I’d come prepared to talk as long as he’d hear me!
Rocky
Tuesday, September 30, 2008
Best Practices in Security Operations: Collection
Over the past few weeks, I’ve been giving a very similar presentation to different audiences when I’m asked to talk about the need for Collection across an enterprise. The attached diagram illustrates some of the flaws that come with relying on “point solutions” to provide enterprise visibility. Each technology has its inherent blind spots, but in conjunction with one another and a few other tools - namely Full Packet Capture, Log Management and SIEM - you can provide your Detection team the ultimate information set for analysis and reduce your time to incident identification. This blog highlights some of the requirements for successfully establishing an effective Collection capability.
The overall model I’ve started to use more formally (Collection, Analysis, Escalation, Remediation, Reporting) is nearly the same as the Intelligence Model (Collection, Analysis, Dissemination) I used in my previous life, and it is consistent with every other description I’ve found over the years. Richard Bejtlich describes a fantastic CAER model in a recent blog post, which I’ve used to help justify this simplistic approach with several customers.
Collection: We have to ensure we have the right data available to the analysts, as efficiently as possible.
Context: This means we must understand what it is we are looking at and why we are looking at it. We must also understand the use of our networks/systems.
Vision/Clarity: We have to have clear access into the data; preferably the data is clean (void of as much garbage as possible). We must continually prune our systems in an operational mode (daily). Well-maintained systems are crucial to the success of the Detection Team.
Centralization: The moment you have to log in to more than one system to retrieve the log information, you’re fighting an uphill battle. Centralized Log Management is vital, brain dead simple, cost effective and hey… everyone else is doing it! There are tools out there that can help you centralize access to the logs, integrate with SIEM and collect data from your structured and unstructured data sets.
Automation: SIEM is your friend in this area. Much like an IDS you can generate “alerts” to notify you and speed up your incident identification timetable. With SIEM the “alerting” you get to enable is much more complex than with traditional IDS and you have built-in workflow to enable your Detection and Response Teams. In the end with SIEM you are making your life more efficient – as long as you plan for it and maintain it.
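To make the “automation” point concrete, here is a minimal sketch of the kind of rule a SIEM evaluates for you continuously. The event fields, the threshold, and the five-minute window are all assumptions for illustration - real SIEM rule languages differ by product - but the shape (aggregate, window, threshold, alert) is the same:

```python
from collections import defaultdict

# Hypothetical sketch: alert when one source IP generates failed logins
# against several distinct hosts within a sliding time window. Field
# names and thresholds are illustrative, not tied to any SIEM product.
WINDOW_SECONDS = 300
DISTINCT_HOST_THRESHOLD = 3

def detect_login_sweeps(events):
    """events: iterable of dicts with 'time', 'src_ip', 'dst_host', 'outcome'."""
    seen = defaultdict(list)  # src_ip -> [(time, dst_host), ...]
    alerts = []
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["outcome"] != "failure":
            continue
        seen[ev["src_ip"]].append((ev["time"], ev["dst_host"]))
        # Keep only entries inside the sliding window.
        cutoff = ev["time"] - WINDOW_SECONDS
        seen[ev["src_ip"]] = [h for h in seen[ev["src_ip"]] if h[0] >= cutoff]
        hosts = {h[1] for h in seen[ev["src_ip"]]}
        if len(hosts) >= DISTINCT_HOST_THRESHOLD:
            alerts.append((ev["src_ip"], sorted(hosts)))
    return alerts
```

The point of the sketch is the maintenance cost: the window, the threshold, and the field mappings all need tuning against your real traffic, which is exactly the planning and upkeep described above.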
Data: This seems simple… I’ll log x, y and z. In fact, introducing data sets should be the most complex part of your planning process. You must understand exactly why and how you will use the information made available to you before you tackle the task of bringing in the data and trying to analyze it. Easy example: asking a traditional network security analyst to look at custom financial application logs is going to hurt, a lot. Certainly you can look at that data, but you need Context to go along with it. Security point solutions (IDS, FW, AV, Proxy, etc.) all have their place and should be monitored, but you will quickly find you’re fighting a losing battle if you rely solely on the information provided by these systems. The solution is a more holistic approach to seeing network traffic - Full Packet Capture. For one customer, we monitor multiple OC-12’s with full packet data available for over 2 weeks and metadata available for 6+ months. It is absolutely essential to reducing the time to incident identification, and it helps us tune other sensors as well. Before we had the data, we were imaging drives to find confirmation of compromise/infection; these events are now trivial to verify with session reconstruction, and we enable our Tier 1 to act much more quickly and with a much higher confidence level in their actions.
Monday, July 14, 2008
SIEM Best Practices: Are you ready for correlation?
This post is a follow up to my previous posts related to SIEM Best Practices: “SIEM: Before you buy” , “SIEM: basic correlation and default content” and ”SIEM: Basic Success Criteria”. It is my hope that this information provides you with valuable insight into the best possible approaches for Log Management, Security Information, and Event Management and Security Operations for your organization.
Many organizations try their best to leverage Security Information and Event Management (SIEM or SIM) solutions in an attempt to drive a more proactive stance in monitoring their networks and/or systems for malicious traffic and alerting on intrusions. Unfortunately, there are many examples in which these solutions did not deliver the results a company originally expected. Here are some reasons for this mismatch in expectations. Many other reasons exist; these are just two easy examples:
1) For various reasons, it seems that event correlation projects begin with a focus on the SIEM technology, as opposed to starting with the actual business needs. The SIEM vendors understand the general problem sets, and their technology attempts to solve those problems far more efficiently than most customers can articulate their own requirements. This creates an unbalanced environment before you even enter negotiations. During the sales cycle, there may be a "bake-off" or other vendor selection criteria that the customer goes through, but the actual requirements are not all that well understood. They may be too far in the future to add value, or perhaps the vendor is so skilled at answering the question with flash and dazzle that the customer overlooks the real work required to get to the solution in their production environment. The SIEM is one of many tools that can enhance your information security capabilities, but it is not a "silver bullet," nor is it simple and easy. The implications of a SIEM deployment need to be fully vetted well before you consider a purchase of one of these products. The clearer you are on your use of the system, the clearer the vendor capabilities become, and the easier it is to pick the right vendor(s) for your environment.
2) In other cases, the SIEM and/or Log Management purchase is driven by a compliance or governance activity versus being aligned with an overall enterprise approach to information security. This focus creates an environment where the customer fails to fully consider the end use of the data and therefore is not able to realize the most cost effective solution to meeting their requirements.
NOTE: If your first set of questions to the SIEM vendor is related to "speeds and feeds," you are having the wrong conversation. Stop now. Read carefully - if what I am describing still doesn't make sense, call us before buying your SIEM and we'll explain it further.
The recommended approach: Identify Critical systems, Users, Customers and Event Sources
1) Identify the key sources of information, the key regulatory requirements, and the associated business-risk driven priorities. Classify the applications and map the associated business requirements/criticality.
2) Understand who is consuming the information being generated by the SIEM. Know how they will use this information and what problem it solves. (What is the value?)
3) Understand your SIEM users, the Incident Response, IT Infrastructure, Management, and Security Operations Teams. They may all have different, but critical, use-cases for the SIEM.
Log Management:
Develop and execute on a log consolidation and management program. Consider the following when planning an implementation of a Log Management solution.
Key Log Management Questions:
• What are the key event sources you can start with in phase one? What solutions can you deliver with this information based on your consumers' needs?
• What information can you obtain (and deal with in a reasonable fashion) in phases two, three, and four?
The Benefits:
• You are in a better position to realize your organization's overall requirements
• Improve the processes for log review/analysis
• Increase your Incident Response effectiveness
• Immediately add value to audit and regulatory compliance efforts.
Identify and Consolidate Event Sources
The identification and consolidation of event sources (Log Management) will add significant value to the implementation of an event log correlation project.
The Benefits:
• The tools do what they are best at and you receive as much value as possible. The SIEM can focus on correlation and workflow, while the log management tool can focus on eating as much data as you can throw at it.
• Much of the hard work of obtaining the data is already accomplished by the log management tool.
• Storage costs are greatly reduced (bandwidth and hardware requirements are also highly likely to be significantly reduced)
• Technical architecture is more easily defined based on log management: the event sources, the event rates, and the "value" of the data being processed are already known.
• You save time, energy, and money. Things get done right the first time, in the most efficient manner possible, without wasting time/resources/licensing.
Define use-cases for the information.
In order to deliver a solution based on the “use-case” you need to know as much as possible, including but not limited to:
• What event sources are required to provide context to the analyst and/or end-user?
• What log level is required for each event source?
• What asset information is required?
• What business context is required?
• You will need to understand standard traffic patterns to reduce false positives. This means you will need to understand the network/system/application/user and not just IDS events.
• You will know what you want to do with the output of the correlated scenario (is it informational only, reporting, alerting, etc?).
• What is the workflow from cradle to grave for the information being generated?
• How do I (and when do I) present information to the consumer, management, auditor, etc.?
The Benefits:
• You are now ready to start considering a SIEM. You will begin to show the real value of correlation for your environment for each defined use-case.
• You will save time and money on hardware, storage, processing, licensing, etc versus just buying the SIEM and figuring it out later.
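To make the use-case exercise above concrete, here is a hypothetical sketch of a use-case captured as data rather than prose. Every field name and value is illustrative - the point is that a use-case this specific can tell you, before you buy anything, which event sources and context you are still missing:

```python
# Hypothetical use-case record capturing the questions above as data.
# All names and values are illustrative, not from any real deployment.
use_case = {
    "name": "Detect brute-force logins against PCI systems",
    "event_sources": ["domain controllers", "VPN concentrator", "IDS"],
    "log_level_required": {"domain controllers": "audit-failure"},
    "asset_context": ["PCI zone membership", "system owner"],
    "output": "alert",  # informational / report / alert
    "escalation_path": "Tier 1 -> Tier 2 -> Incident Response",
    "consumers": ["Security Operations", "Compliance reporting"],
}

def missing_context(uc, available_sources):
    """Return the event sources the use-case needs that are not yet collected."""
    return [s for s in uc["event_sources"] if s not in available_sources]
```

Run `missing_context` against what you actually collect today and you have a phase-one gap list, which is a far better conversation starter with a vendor than "speeds and feeds."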
Comments on the Technologies:
Log Management: In most cases, this technology should probably be considered a commodity: these tools collect logs, store them, and provide some sort of reporting functionality. Some do it better and faster and play well with others. While looking for differentiators in log management solutions, consider the event sources that go beyond the norm. What you incorporate into the log management system in phase one may be simple, but phase four may reach well beyond the product's ability to provide value (i.e., the data may not be accessible in a meaningful manner). Look for tools that can deal with new data with a minimum of parsing/coding/mapping and that can handle both structured and unstructured data sets. Look for tools that can forward events to a SIEM (if your requirements dictate the need for SIEM) in an efficient manner. This means they should be able to forward any subset of events that your requirements dictate in a flexible manner (not solely based on syslog priority or vendor-defined category).
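As a sketch of what "forward any subset of events in a flexible manner" could look like, here is an illustrative filter that goes beyond syslog priority. The field names and the specific criteria are assumptions for the example, not any vendor's configuration syntax:

```python
# Sketch: the log management tier keeps everything; only events matching
# arbitrary field predicates (ORed together) are forwarded to the SIEM.
# Field names and criteria below are purely illustrative.
def make_forwarder(predicates):
    """predicates: list of callables taking an event dict -> bool."""
    def should_forward(event):
        return any(p(event) for p in predicates)
    return should_forward

forward_to_siem = make_forwarder([
    lambda e: e.get("source") == "ids" and e.get("severity", 0) >= 3,
    lambda e: e.get("source") == "proxy" and e.get("category") == "malware",
    lambda e: e.get("user") in {"root", "Administrator"},
])
```

Note that the criteria reference any field of the event (source, category, user), not just a priority value - that flexibility is the requirement being described.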
Here are some basic requirements the Log Management tools must be able to support:
• Handle current event sources, (the identified phase one event sources), in your environment.
• Handle any event source using common reporting methods (SNMP, SYSLOG, FILE READER, ODBC/JDBC, WMI, API/SDK)
• Parse data or display data in a meaningful way to the user/tools – not lumped into a blob. The data should be searchable by any field in the data.
• The system should provide meaningful reports on the data in the system
• You should be able to search across all, some, or one of the log management devices in your environment based on the use-case.
• Advanced User ACL's (you should be able to restrict functionality of the system and access to data by user/groups)
• You should be able to define a complex set of criteria and forward that information to the SIEM.
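As a toy illustration of two of the requirements above - parsing data into searchable fields rather than a blob, and searching across some or all collectors - here is a sketch. The syslog-ish line format and the field names are assumptions for the example:

```python
import re

# Minimal illustration of "searchable by any field": parse a syslog-like
# line into named fields, then filter across several collectors at once.
LINE = re.compile(r"(?P<month>\w{3}) +(?P<day>\d+) (?P<time>[\d:]+) "
                  r"(?P<host>\S+) (?P<app>[\w/-]+)(?:\[\d+\])?: (?P<msg>.*)")

def parse(line):
    m = LINE.match(line)
    return m.groupdict() if m else None

def search(collectors, **criteria):
    """collectors: dict of name -> list of raw lines; criteria: field=value."""
    hits = []
    for name, lines in collectors.items():
        for line in lines:
            fields = parse(line)
            if fields and all(fields.get(k) == v for k, v in criteria.items()):
                hits.append((name, fields))
    return hits
```

A real product does this at scale with indexing, but the test is the same: can you ask for `host="web01"` or `app="sshd"` across every collector you own without caring which box holds the log?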
SIEM: The key is a solution that is both intuitive and flexible. Look for event correlation tools that can grow with you in terms of correlative scenarios; start with your exact correlative scenarios. Look for correlation tools that work with your anticipated workflow. IT support, security operations, and incident response teams have different uses for these systems, and you need to understand how each of them derives requirements. Purchasing based on price alone will leave you with a tool that, in reality, is no better than a good log management solution and some Perl scripts. Purchasing based on fancy features means you will overpay for something you'll never accomplish. Drive the vendors to illustrate how they meet your needs. If you can't articulate your requirements in a way that separates the vendors, either hire a consultant to help you understand them better or wait to purchase the SIEM; it may save you hundreds of thousands of dollars in the end.
Here are some basic requirements SIEM tool must be able to support:
• Vulnerability Scanning Tools (Assets should be created and correlated against vulnerable/non-vulnerable systems)
• Advanced ACL's
• Analyst Workflow: External tool integration, Event "Marking" or notes for escalation, Case Management, and Integration with ticketing systems
• Reporting and Dashboarding
• Integration with Log Management devices (bi-directionally receive events and search events). Most accomplish this natively via SSL data transfer with their own solutions, and via syslog, API/SDK, or other OS tools (FIFO, log file) for other vendors' solutions.
• KEY: Flexible correlation – not just a pre-defined set of values; you should be able to correlate on any value maintained within the system. Correlation amongst standard and custom fields is the key to future expandability beyond the most basic use-cases.
• KEY: If you have already justified the need for a security analysis program, the use of the tool for these users is critical. Many have similar functionality that is only appreciated by the advanced user base. Make sure these users get hands-on and understand those subtle but time-saving/consuming differences. Most investigatory actions should be right-click driven or otherwise very intuitive in nature for the analyst.
• Analytical tools: statistical analysis, data mining, visualization tools can add significant value. Make sure you understand how you will use them in your operations before purchasing – there may be better alternatives out there.
Other relevant thoughts:
Event Sources (today and tomorrow): In our experience, we have seen many situations where the only event sources considered default to a standard set of network security monitoring devices. Though important, these devices are not always focused and prioritized according to business risks and business criticalities, nor do they always add the most correlative value. For example, firewalls are a useful data set, but depending on the configuration, location, and log levels, these devices may provide only very limited value in terms of correlating malicious traffic. The costs for transporting, processing, and storing these logs may or may not make sense in a SIEM product. Capturing the data and forwarding a subset of it based on a specified use-case is a better idea in most cases. This is where other technologies, like a Log Management device in conjunction with a SIEM product, can add significant value if designed properly.
SIEM Staffing Requirements: Many organizations have the expectation that the SIEM will provide all necessary services related to information security and that their overall staffing requirements will be reduced. The truth is, if the SIEM is properly utilized, your staff will be extraordinarily busy responding to newly identified security incidents they did not previously have visibility into. A properly configured, tuned, and maintained SIEM will make your team more efficient, but it will also increase the workload. A less than ideally maintained SIEM will increase your staffing requirements and workload and reduce overall efficiency dramatically by forcing the team to respond to a multitude of activities that may or may not have a security impact. Consolidating log information provides a lot of raw material that analysts traditionally become mired in for hours/days/weeks before they can find any single thing of value. Many network security analysts have not had the experience of looking at Oracle logs or other application data, and when presented with this data without context, it creates a very complex work environment. Having a solid knowledge base, defined workflows, escalation paths, and an understanding of how to maintain the system reduces this complexity and allows the security team to focus on incidents.
Depending on the networks and systems you are protecting, there may be more useful sources of information, such as intrusion detection, web proxy logs, DNS and Email logs, VPN logs, operating system logs, application logs, database logs, directory services, and many other types of information sets that can add significant context and value into both a Log Management and SIEM solution. Failing to take into account these valuable sources of information can result in missed security incidents and will significantly reduce the ability of the security team to filter out false positives. The SIEM will require resources to maintain the content (Correlation Rules, Reports, etc), the system (patches, SP, Versions), and in certain circumstances the database (updates, tuning, etc). The latter, system/database, are not full-time resources, but should be accounted for in your planning. The content is constantly evolving and requires daily, if not hourly interaction – one or more resources should be considered to handle the load as one of their primary duties.
Your thoughts/questions/comments are welcome.
-Rocky
Tuesday, March 25, 2008
SIEM Best Practices: Basic Correlation and Default Content
Typical SIEM vendor use case: Company "X" wants to correlate, on a near real-time basis, several stimuli and responses - say network traffic, IDS signature(s) and server/application response(s) - and have it alert key personnel only when it matters most. With the right event sources, environmental context and log levels you can do that with a good SIEM based on time, IPs, ports, services, vulnerabilities and/or other attributes of the related log entries.
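Stripped of product specifics, the vendor use case above reduces to a windowed join. This hypothetical sketch (the field names and the 60-second window are my assumptions, not any vendor's defaults) shows the core idea of correlating an IDS alert with a server response by IP and time:

```python
# A stripped-down version of the vendor use case: join IDS alerts with
# server errors by target IP within a short window, so an alert only
# fires when the attack appears to have provoked a response.
WINDOW = 60  # seconds; an assumption for illustration

def correlate(ids_alerts, server_errors):
    """Each input: list of dicts with 'ts' (epoch seconds) and 'ip'."""
    matches = []
    for alert in ids_alerts:
        for err in server_errors:
            if err["ip"] == alert["ip"] and 0 <= err["ts"] - alert["ts"] <= WINDOW:
                matches.append((alert, err))
    return matches
```

Notice how much the result depends on inputs you control: if the server isn't logging errors, or the IDS signature is noisy, this "correlation" either fires on nothing or on everything - which is exactly the real-world problem described next.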
What happens in the real world? As many of you know, if you leave the default content enabled with your raw data flow, all hell breaks loose. You will see tens or hundreds of thousands of correlated activities on a daily basis. This is just a function of lab versus real life.
First, many organizations still don't have what I would consider a robust IDS signature management program, and the alerting from their current systems is ….. um….. well….. interesting. The really good IDS shops have lots of custom signatures that the vendors haven't categorized and that won't necessarily be included in default correlation rules.
Network traffic logging is inconsistent at best, and finding out what systems/applications organizations have (not to mention how and what they log) is simply not realistic without a fundamental change in the organization. Neither of the above is the fault of the SIEM vendor or product – it is simply the world we live in.
Yes, this is all changing (very slowly), and certain compliance activities (PCI, for example) can be used to help us slowly drive those changes (if we are lucky enough to present an enterprise-wide security enhancement scenario to the exec team before a vendor pitches a product to simply "check the box").
So then what value does SIEM correlation add? For now, I'll just say that your mileage may vary, but you can control it! Several factors can add significant value to the data: location information, system information, business use information, vulnerability information, etc. To the security analyst, the value of the information increases as more of the context of the data set, the targeted system and the overall environment is known. The less context, the more work the analyst has to do to understand the alerts that are generated.
You can do the hard work of contextualizing (is that a word???) your environment as best you can for your SIEM ahead of time and stay on top of it as the environment changes, or you can do that work with each event you have to analyze. The choice is yours, depending on how you configure your SIEM. In the end, the team that is prepared with the right ingredients at the beginning will enjoy SIEM life much more.
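Here is one way to picture "contextualizing ahead of time": pre-building an asset table and enriching events with it as they arrive, so neither the analyst nor the correlation engine ever sees a bare IP. The asset table and every field in it are hypothetical:

```python
# Illustrative enrichment: attach asset context to raw events up front.
# The asset table and its fields are entirely hypothetical.
ASSETS = {
    "10.1.1.5": {"role": "payment-db", "criticality": "high", "owner": "finance"},
    "10.2.2.9": {"role": "dev-workstation", "criticality": "low", "owner": "eng"},
}

def enrich(event):
    """Merge asset context into the event under 'asset_*' keys."""
    ctx = ASSETS.get(event.get("dst_ip"), {"role": "unknown",
                                           "criticality": "unknown",
                                           "owner": "unknown"})
    return {**event, **{"asset_" + k: v for k, v in ctx.items()}}
```

The work of maintaining `ASSETS` as the environment changes is the "stay on top of it" part; skip it, and every alert forces the analyst to reconstruct that context by hand.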
Other, more technical rationale: SIEM content (correlation, statistical analysis, reports) is, after all, a set of memory, processor and/or query operations that consume resources in order to function and provide you with the end result. As minimal as those resources may be for each piece of content, in aggregate or across an enterprise your data will eventually overwhelm those resources if they are not managed properly. You can hurt yourself with bad content. Yes, the good products are scalable; however, the complexity that comes with scalability is the topic of a future blog post.
SIEM Configuration Default Content Recommendations:
1. Know your environment. The better you understand your environment the better you can tune the tools you have to help you tame it. I use the concept of system and or network profiles to help capture that data into a knowledge base.
2. Disable default system content: Yes, a lot of time and energy went into developing this content, and in many cases it is highly useful. My recommendation is to use it as a template and/or a learning tool. Enable the content that you need and understand. If it doesn't align to a documented workflow or feed a report – what exactly is it doing???
3. Document your known requirements and plan. There is nothing that says you can't build content to solve the tactical and strategic needs of your organization. I suggest a roadmap discussion; plan out your requirements accordingly. Some of these SIEM solutions provide three or more ways to accomplish essentially the same thing, so you can get lost easily.
4. If you don’t know – ask. There is an ever increasing number of SIEM users out there who have similar issues (perhaps with different technologies), and their approaches can be leveraged through forums/blogs/LinkedIn/etc. Decurity will be providing more help in this area soon. So stay tuned.
Monday, March 24, 2008
SIEM Best Practices: Before you buy
Knowledge of your Enterprise: This is the single most important factor to a successful SIEM deployment. IMHO, you simply cannot have a successfully deployed SIEM product without significant knowledge of your environment. Here is some of my rationale on this subject. As awesome as correlation can be (and it can be phenomenal) correlation can’t overcome lack of context. Correlation Rules work best and more efficiently if you can provide them with boundary conditions. The more focus and context you provide the more specific the results will be and the more automated your responses can be (in a word – efficiency).
Most anyone who has spent the time to hunt down this humble little blog post knows of Richard Bejtlich. In my mind, Richard is one of the most amazing minds in information security. If you haven’t read his books/blog please take the time and do so! Anyway… back to the topic…. In a recent blog post based on some recent conferences he attended, Richard boils down a few SIEM best practices into a few simple statements.
1. “Deploying a SIM requires understanding your network to begin with. You can’t deploy a SIM and expect to use it to learn how your network works.”
2. “You can’t use a SIM to reduce security staffing. Your staffing requirements will definitely increase once you begin to discover suspicious and malicious activity.”
3. “You can’t expect tier one analysts to be sufficient once a SIM is deployed. They still need to escalate to tier two and three analysts.”
As I also attended/facilitated this discussion at the Institute of Applied Network Security Forum in February, I wanted to take this opportunity to provide some context to these statements from my personal perspective.
In the real world, what you get is the best effort of the security and/or project teams (not necessarily in coordination with one another) to lump data into the SIEM. Sometimes a "casserole surprise" works – and other times, well… there's always pizza.
Some Additional Best Practice Recommendations with regards to SIEM:
1. Understand your requirements for a SIEM product before you purchase and/or begin your implementation project. Log Management/Log Search and SIEM are certainly close relatives in certain aspects of their functionality, but you must understand your needs before jumping into this water. A multi-million dollar mistake (product, hardware, software, consultants, internal team, etc. for a large global enterprise deployment) can be slightly career limiting.
Note: If you are a Fortune 500 organization – consider what it would take to align your IT Security vision across business units to leverage resources and funding and make this investment work for everyone. Many organizations I’ve seen over the past year are considering a similar model, where one business unit acts as the Enterprise-wide Security Operations Center.
2. The vendor does not know your business – how could they possibly know your business? If they did, wouldn’t they be your partner or a competitor? The vendor knows their product, and they know some of the issues you face with regard to compliance, regulatory concerns and general information security. Your team needs to be the driving force behind the requirements for the SIEM and the eventual implementation of the SIEM product. The SIEM product should be well constructed and intuitive to the point where you can mold it around your business requirements and not the other way around. If you find yourself asking the vendor for suggestions – refer to suggestion #4 below.
3. Focus internal resources in the right areas: I see organizations spending months of effort designing architecture documentation on how the SIEM will talk to source devices, attempting to measure assumed network bandwidth, projecting storage requirements and obtaining hardware – all before they decide what data is valuable to them. Rather than taking traditional sources (IDS/FW/VPN/OS/etc.) as the “Gospel,” look at your IT environment, your business’s use of IT and the direction of your organization, and decide which event sources might offer more value. For example: Web Proxy may offer more context and value than Firewall in certain organizations. If storage costs are a limiting factor – spend the time to figure out what information is going to better suit the needs of your information security program. I’d rather you spend your time aligning asset management, data classification, and prioritizing event sources and correlation scenarios than buying hardware/storage and planning against imaginary data sets.
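One simple way to frame that prioritization exercise is to score each candidate event source by estimated security value per unit of storage cost, rather than assuming the traditional sources deserve collection by default. The sketch below is purely illustrative – the source names, value scores, and volume estimates are invented, and in practice you would derive them from your own environment:

```python
# Hypothetical sketch: rank candidate event sources by estimated security
# value per GB/day of storage, instead of collecting the "traditional"
# sources first. All numbers here are made up for illustration.

sources = [
    # (name, estimated security value 1-10, estimated GB/day)
    ("firewall",  4, 120),
    ("web_proxy", 8,  60),
    ("ids",       7,  15),
    ("vpn",       5,   5),
]

def rank_sources(sources):
    """Order event sources by value per GB/day, highest first."""
    return sorted(sources, key=lambda s: s[1] / s[2], reverse=True)
```

Under these invented numbers, the high-volume firewall feed ranks last and the compact VPN feed ranks first – which is exactly the kind of counterintuitive result this exercise surfaces before you commit to hardware and storage.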
4. Hire experts: The vendors have great resources and you should consider using their services, but there are also alternatives available that might be able to help you on a larger scale than just the best possible solution built around their product(s). You have different needs of the products based on your intended usage of the SIEM and your overall environment. One size may fit all, but the extra cloth may become cumbersome…
Stay tuned – there is a lot more to come on SIEM and Log Management!