Splunk Search

Building new events based on transactions at index time

acidkewpie
Path Finder

Howdy,

I've a load balancer which happily sends event logs when certain events happen in a web app flow: it sends a log when an HTTP request is made, when the server responds, when it serves from cache, when it blocks a security violation, when it can't reach the server it needed, and so on. My goal is to dump rich, "stateless" data about each event and deal with it later. I want to build the logging logic in the logging systems, not in the load balancer, for one reason because it's much, much easier to change out-of-band systems like a logging server than in-band network architecture.

Later processing would, as one example, build Apache Combined-style logs to make visible to those who know and understand that log format. I could do this dynamically using a transaction search (by inserting a random tracking ID into each log message, etc.), but this is slow and pretty intensive. Given that I know I want one final log entry reflecting the life of every single web request made, is there a way to build these new log entries dynamically at index time? I'm pretty sure I could trivially schedule a cron job to search the previous 5 minutes and re-log the results, but I'm wondering if there's a slicker, integrated way to do this instead, where upon getting one of the various responses to an HTTP request (response, cache, violation, failure) I can quickly find the matching incoming request and log it all nicely in one entry.
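
For instance, assuming every log line carried a shared tracking field (call it http_id, as in my search further down), something like

http_id=* | transaction http_id maxspan=2m

would stitch a request and its outcome back together, but only at the moment the search runs.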

Thanks

1 Solution

Ayn
Legend

No, you cannot build transactions at index time. That would require Splunk to hold off on indexing events while it waits for some transaction termination condition to match. This could be very resource-intensive (not to mention the difficulty of redesigning the indexing pipeline to support something like this).

I guess one option, if you want the events combined and ready in advance, would be to do what you're already considering: schedule a search that grabs the events, builds transactions out of them, and writes the results to another index using the collect command.
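
As a rough sketch, assuming the events share a correlation field (http_id, per your description) and that the target index already exists (collect will not create it for you):

http_id=* | transaction http_id maxspan=2m | collect index=access_logs

Schedule that over a short sliding window (say the last 5 minutes, lagged slightly so transactions have time to complete) and the combined events become searchable in the other index.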

acidkewpie
Path Finder

Actually, is there any prospect of doing this as an rtsearch instead of on a schedule? It looks to me that the main problem is that, as an rtsearch, there's no built-in mechanism to run it constantly.

For reference, I've a vague search that looks like this...

http_id=* | transaction http_id maxspan=2m | eval timestamp=strftime(_time, "%d/%b/%Y:%H:%M:%S %z") | strcat client_ip " - - [" timestamp "] " http_method " " http_path " HTTP/" http_version " " http_status _raw | fields _raw | collect index=access_logs
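
If it could run as an rtsearch, I imagine the invocation from the CLI would be something like this (untested, and I'm not sure transaction behaves well in real-time mode):

splunk rtsearch 'http_id=* | transaction http_id maxspan=2m | collect index=access_logs'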


acidkewpie
Path Finder

I guess that's fairly straightforward and neat. The end users can wait a few minutes, I expect. Thanks.
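
For the record, I'd expect the scheduled version to boil down to a savedsearches.conf stanza along these lines (the stanza name is my own invention, and the 2-minute lag on the window is to give transactions time to complete):

[build_access_logs]
# run every 5 minutes over a lagged 5-minute window
search = http_id=* | transaction http_id maxspan=2m | eval timestamp=strftime(_time, "%d/%b/%Y:%H:%M:%S %z") | strcat client_ip " - - [" timestamp "] " http_method " " http_path " HTTP/" http_version " " http_status _raw | fields _raw | collect index=access_logs
dispatch.earliest_time = -7m@m
dispatch.latest_time = -2m@m
enableSched = 1
cron_schedule = */5 * * * *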
