Splunk Enterprise Security

Sendmail transactions prior to data model ingestion in Enterprise Security

panovattack
Communicator

When using the Enterprise Security Protocol Intelligence dashboards, how do you build a complete email transaction log (e.g. sourcetype=sendmail qid=* | transaction qid) before sendmail logs are pulled into the data models? Since a single email 'transaction' is spread over numerous log events with the same qid, it would be advantageous to build the complete email 'event' before populating the data models. Any thoughts?
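For concreteness, the kind of consolidated event I'm after would come from something like the following; a minimal sketch, where the maxspan/maxpause values are illustrative guesses and the listed fields assume the usual sendmail extractions:

sourcetype=sendmail qid=*
| transaction qid maxspan=5m maxpause=1m
| table _time qid src_user recipient subject

The catch is that transaction is expensive at search time and its output never lands in the accelerated Email data model, hence the question.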

1 Solution

panovattack
Communicator

I found a workaround to this, based on some of the Splunk documentation. I removed the email tag from the raw email logs directly. I then run a report that aggregates all the fields required by the Email data model and consolidates them by session id. Generally, this looks like:

"index=mta_syslog_log | stats values(email_datamodel_field) as email_datamodel_field by sid"

This report is then sent to a summary index (I've also accelerated it) and runs every 5 minutes. I then tag the summary index events with "email" and configure the ES CIM to include the summary index, using "index=summary source=saved_report_name" as the constraint for an eventtype that assigns the tag.
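As a rough sketch of the moving parts in .conf terms (the report and eventtype names are placeholders, and the schedule/lag values are assumptions; all of this can also be set up through the UI):

# savedsearches.conf -- scheduled report that writes the consolidated events to the summary index
[email_transaction_rollup]
search = index=mta_syslog_log | stats values(src_user) as src_user values(recipient) as recipient max(size) as size by sid
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -6m@m
dispatch.latest_time = -1m@m
action.summary_index = 1
action.summary_index._name = summary

# eventtypes.conf -- match the summary-indexed events
[summary_email_transactions]
search = index=summary source="email_transaction_rollup"

# tags.conf -- assign the email tag so the CIM Email data model picks the events up
[eventtype=summary_email_transactions]
email = enabled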

This duplicates some of the data in a summary index... but it vastly improved data model build performance. Not ideal, but it works.


panovattack
Communicator

I want to give this one a bump; it is becoming a point of significant frustration. Sendmail and email filter logs have fields like src_user, recipient, file_name, size, etc. spread across multiple events, while the Enterprise Security dashboards expect all of this data on one event that gets CIM-normalized into one row. I built a transaction child node, but it can't be accelerated, and it also means I have to rewrite a lot of the ES email dashboards. Is there a practical and efficient way to build the transactions prior to normalization into the data model, so I can benefit from the ES dashboards and acceleration? This must be a common problem for many customers using ES.
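For concreteness, here is the kind of flattened, per-message event I'd want the data model to see; a minimal sketch along the lines of the stats approach above, assuming the standard sendmail qid field and that src_user/recipient/file_name/size are already extracted:

sourcetype=sendmail qid=*
| stats min(_time) as _time values(src_user) as src_user values(recipient) as recipient values(file_name) as file_name max(size) as size by qid

Unlike transaction, a stats search like this can feed a scheduled summary-index report, so the result can still benefit from data model acceleration.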
