Splunk Enterprise Security

Sendmail transactions prior to data model ingestion in Enterprise Security

panovattack
Communicator

When using the Enterprise Security Protocol Intelligence dashboards, how do you build a complete email transaction log (e.g. sourcetype=sendmail qid=* | transaction qid) before the sendmail logs are pulled into the data models? Since a single email 'transaction' is spread over numerous logs with the same qid, it would be advantageous to build the complete email 'event' prior to populating the data models. Any thoughts?
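For illustration, a search-time version of that stitching might look like the following (a sketch; maxspan and the field names are assumptions based on typical sendmail field extractions):

"sourcetype=sendmail qid=* | transaction qid maxspan=5m | table qid src_user recipient file_name size"

This yields one multi-event transaction per qid, but transaction is expensive at search time and its output can't be accelerated, which is the crux of the question.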

1 Solution

panovattack
Communicator

I found a workaround to this, based on some of the Splunk documentation. I removed the email tag from the email logs directly. I then run a report that aggregates all the fields required by the Email data model, consolidated by session id (sid). Generally, this looks like:

"index=mta_syslog_log | stats values(email_datamodel_field) as email_datamodel_field by sid"

This report is then sent to a summary index (which I've also accelerated) and runs every 5 minutes. I then tag the summary index events with "email" and configure the ES CIM to include the summary index, using "index=summary source=saved_report_name" as the constraint for an eventtype which assigns the tags.
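In configuration terms, that tagging step amounts to something like the following (the eventtype name here is an assumption):

eventtypes.conf:
[summary_email_transactions]
search = index=summary source=saved_report_name

tags.conf:
[eventtype=summary_email_transactions]
email = enabled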

This duplicates some of the data in a summary index... but it vastly improved data model build performance. Not ideal, but it works.



panovattack
Communicator

I want to give this one a bump; it is becoming a point of significant frustration. Sendmail and email filter logs have fields like src_user, recipient, file_name, size, etc. spread across multiple events, while the Enterprise Security dashboards expect all of this data to be on one event that gets CIM-normalized into one row of data. I built a transaction child node, which can't be accelerated and also means I have to rewrite a lot of the ES email dashboards. Is there a practical, efficient way to build the transactions prior to normalization into the data model, so I can benefit from the ES dashboards and acceleration? This must be a common problem for many customers using ES.
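A stats-based consolidation keyed on qid is one accelerable alternative to transaction (a sketch; field names are assumptions drawn from the CIM Email fields mentioned above):

"sourcetype=sendmail qid=* | stats values(src_user) as src_user values(recipient) as recipient values(file_name) as file_name max(size) as size by qid"

Unlike transaction, stats distributes across indexers and its output can be summary-indexed on a schedule, which is essentially the workaround described in the accepted solution above.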
