Reporting

Postfix logs index time transaction

Communicator

I know it has been mentioned various times that transactions are the way to go when searching through Postfix transaction logs with separate queue IDs, joining all the queue IDs together. But this can take an insane amount of time if I want to get a record of all email to and from a particular person for the past year (yes, I have been asked for this before).

Sep 13 23:55:41 mailhost1 postfix/qmgr[16069]: [ID 197553 mail.info] 44B09122E5: removed
Sep 13 23:55:42 mailhost1 postfix/smtpd[15824]: [ID 197553 mail.info] connect from example.com[1.1.1.1]
Sep 13 23:55:42 mailhost1 postfix/smtpd[15824]: [ID 197553 mail.info] 3042911BC9: client=example.com[1.1.1.1]
Sep 13 23:55:42 mailhost1 postfix/cleanup[15826]: [ID 197553 mail.info] 3042911BC9: message-id=<01010101010101@example.com>
Sep 13 23:55:42 mailhost1 postfix/qmgr[16069]: [ID 197553 mail.info] 3042911BC9: from=, size=8033, nrcpt=1 (queue active)
Sep 13 23:55:42 mailhost1 postfix/smtpd[15824]: [ID 197553 mail.info] disconnect from example.com[1.1.1.1]
Sep 13 23:55:42 mailhost1 postfix/cleanup[15835]: [ID 197553 mail.info] 4A870122E5: message-id=<01010101010101@example.com>
Sep 13 23:55:42 mailhost1 postfix/local[15812]: [ID 197553 mail.info] 3042911BC9: to=, relay=local, delay=0.16, delays=0.1/0/0/0.05, dsn=2.0.0, status=sent (forwarded as 4A870122E5)
Sep 13 23:55:42 mailhost1 postfix/qmgr[16069]: [ID 197553 mail.info] 4A870122E5: from=, size=8165, nrcpt=1 (queue active)
Sep 13 23:55:42 mailhost1 postfix/qmgr[16069]: [ID 197553 mail.info] 3042911BC9: removed
Sep 13 23:55:42 mailhost1 postfix/smtp[15718]: [ID 197553 mail.info] 4A870122E5: to=, relay=mymailserver.mydomain.com[192.168.1.2]:25, delay=0.08, delays=0.07/0/0/0, dsn=2.0.0, status=sent (250 Message accepted for delivery)

That's what the log looks like. I basically want a transaction based on the SMTP transaction ID (the queue ID), except I would like it done at index time.
I've already played with the breakafter settings in props.conf, but if a message sits in the queue for a long period of time, that won't work.
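To be concrete, here is the kind of stitching I mean, sketched in Python outside of Splunk (the regexes and the forwarded-as chaining are assumptions based on the log sample above, not a vetted Postfix parser): group lines by queue ID, then merge chains linked by "forwarded as" into one transaction.

```python
import re
from collections import defaultdict

# Queue ID after the syslog tag, e.g. "... postfix/qmgr[16069]: [ID ...] 44B09122E5: removed"
QID_RE = re.compile(r"\]: (?:\[ID [^\]]+\] )?([0-9A-F]{8,}): (.*)")
FWD_RE = re.compile(r"forwarded as ([0-9A-F]+)")

def stitch(lines):
    """Group log lines by queue ID, then merge chains linked by
    'status=sent (forwarded as <new-queue-id>)' into one transaction."""
    by_qid = defaultdict(list)
    link = {}  # originating queue ID -> forwarded-to queue ID
    for line in lines:
        m = QID_RE.search(line)
        if not m:
            continue  # connect/disconnect lines carry no queue ID
        qid, rest = m.groups()
        by_qid[qid].append(line)
        f = FWD_RE.search(rest)
        if f:
            link[qid] = f.group(1)
    forwarded_to = set(link.values())
    txns = []
    for qid in by_qid:
        if qid in forwarded_to:
            continue  # will be pulled in by the queue ID that forwarded to it
        txn, cur, seen = [], qid, set()
        while cur is not None and cur not in seen:  # seen guards against loops
            seen.add(cur)
            txn.extend(by_qid.get(cur, []))
            cur = link.get(cur)
        txns.append(txn)
    return txns
```

Run against the sample above, this yields one transaction holding both the 3042911BC9 and 4A870122E5 lines, and a separate one for 44B09122E5.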

Any ideas? I know this is a pretty complex operation. If there is no good way to do it, I'm okay with that; I just thought I'd ask.

Thanks,
--adam


SplunkTrust

The "transaction" operation is a search language command. I don't think it's possible to use it as part of an index time transformation.

You might be able to accomplish something similar using summary indexing - basically, pre-compute your transactions in the background and store the results in a summary index. (I've not tried this, so there could be additional, unanticipated pain)
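A sketch of what that scheduled search might look like (untested; the queue-ID extraction regex and the maxpause value are assumptions to adapt to your data), with summary indexing enabled on the saved search so the pre-computed transactions land in a summary index:

```
sourcetype=postfix
| rex "\]:\s(?:\[ID[^\]]*\]\s)?(?<qid>[0-9A-F]{8,}):"
| transaction qid maxpause=24h
```

The year-long report would then run against the much smaller summary index instead of the raw postfix events.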



Communicator

I'm sensing an insurmountable amount of pain in using summary indexing to accomplish this. Plus, I was wondering if it was something simple that I just hadn't realized yet.

Thanks!


Splunk Employee

You can do this. One limitation is that sourcetype, source, and host won't be preserved with the out-of-the-box summarization, but that shouldn't matter much for this type of use case, and it is possible to work around. You may also run into a bug/limitation where only 10,000 results are kept when using the "checkbox" summary indexing, so you may have to work around that by invoking the collect command directly in your search.
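Invoking collect directly might look like this (untested sketch; the extraction regex and the summary index name postfix_summary are assumptions, and the summary index must already exist):

```
sourcetype=postfix
| rex "\]:\s(?:\[ID[^\]]*\]\s)?(?<qid>[0-9A-F]{8,}):"
| transaction qid maxpause=24h
| collect index=postfix_summary
```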