Splunk Enterprise Security

Sendmail transactions prior to data model ingestion in Enterprise Security

panovattack
Communicator

When using the Enterprise Security Protocol Intelligence dashboards, how do you build a complete email transaction log (e.g. sourcetype=sendmail qid=* | transaction qid) before the sendmail logs are pulled into the data models? Since a single email 'transaction' is spread over numerous log events sharing the same qid, it would be advantageous to build the complete email 'event' before populating the data models. Any thoughts?
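For reference, a minimal sketch of that transaction-style search, expanded to show the kind of consolidated event it produces (the maxspan value and fields like src_user and recipient are illustrative, not confirmed extractions):

sourcetype=sendmail qid=*
| transaction qid maxspan=10m
| table _time qid src_user recipient subject size

transaction stitches every event sharing a qid into one multivalue event, but it is expensive at search time and, as discussed below, can't be accelerated inside a data model.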

1 Solution

panovattack
Communicator

I found a workaround to this, based on some of the Splunk documentation. I removed the email tag from the email logs directly. I then run a report that aggregates all the fields required by the Email data model and consolidates them by session ID (sid). Generally, this looks like:

"index=mta_syslog_log | stats values(email_datamodel_field) as email_datamodel_field by sid"

This report is sent to a summary index (I've also accelerated it) and runs every 5 minutes. I then tag the summary index events with "email" and configure the ES CIM to include the summary index, using "index=summary source=saved_report_name" as the constraint for an eventtype which assigns the tags.
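As a minimal sketch of that wiring (the report and eventtype names here are hypothetical; only the index=summary source=saved_report_name constraint comes from the description above):

savedsearches.conf:

[email_transaction_rollup]
search = index=mta_syslog_log | stats values(src_user) as src_user values(recipient) as recipient max(size) as size by sid
cron_schedule = */5 * * * *
enableSched = 1
dispatch.earliest_time = -5m@m
dispatch.latest_time = @m
# write the results to the summary index instead of just scheduling the search
action.summary_index = 1
action.summary_index._name = summary

eventtypes.conf:

[summary_email_transactions]
search = index=summary source=saved_report_name

tags.conf:

# assign the email tag so the CIM Email data model picks these events up
[eventtype=summary_email_transactions]
email = enabled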

This duplicates some of the data in a summary index... but it vastly improved data model build performance. Not ideal, but it works.


panovattack
Communicator

I want to give this one a bump; it's becoming a point of significant frustration. Sendmail and email filter logs have fields like src_user, recipient, file_name, size, etc. spread across multiple events. The Enterprise Security dashboards expect all of this data to be on one event that gets CIM-normalized into one row of data. I built a transaction-based child node, which can't be accelerated and also means I have to rewrite a lot of ES email dashboards. Is there a practical and efficient way to build the transactions prior to normalization into the data model, so I can benefit from the ES dashboards and acceleration? This must be a common problem for many customers using ES.
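A quick way to sanity-check whether summary-indexed events like those above are actually reaching the CIM Email data model (All_Email is the model's root event dataset; the field list is illustrative):

| datamodel Email All_Email search
| table _time All_Email.src_user All_Email.recipient All_Email.file_name All_Email.size

And once acceleration is running, tstats against the model shows the rows the ES dashboards will actually see:

| tstats count from datamodel=Email by All_Email.src_user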
