Dear Splunk community,
I have a Python application that pushes data to Splunk every time it is executed. Multiple events are pushed in JSON format. Only a subset of the data being sent, namely two fields, changes during a job execution; the rest is constant for the whole run (think of it as job metadata). I would like to have that metadata in Splunk so I can filter on it, but I also don't like pushing lots of identical data with every event. I guess what I am looking for is some sort of bulk tagging after each import, where each job metadata field would become a label.
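For illustration, two consecutive events from one run might look like this (field names simplified):

```
{"ts": "2024-05-01T10:00:01Z", "step": "extract",
 "job_id": "run-4711", "pipeline": "nightly-sync",
 "config_hash": "a-quite-long-string-that-really-does-not-change"}
{"ts": "2024-05-01T10:00:07Z", "step": "load",
 "job_id": "run-4711", "pipeline": "nightly-sync",
 "config_hash": "a-quite-long-string-that-really-does-not-change"}
```

Only `ts` and `step` change; the other three fields are identical in every event of the run.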
I appreciate any thoughts/suggestions on how to do this using Splunk BKMs.
You probably could associate some INGEST_EVAL settings with the appropriate sourcetype or source so Splunk will automatically add fields to the events, but it's far easier to have the Python app continue to do it.
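A minimal sketch of the INGEST_EVAL route, assuming a sourcetype of `my_app:json` and a job ID that Splunk can recover from the source path (all names here are made up; this only works if the per-job values are derivable from something Splunk already sees, such as the source, the host, or the event itself):

```
# props.conf -- bind the transform to your sourcetype
[my_app:json]
TRANSFORMS-add_job_meta = add_job_meta

# transforms.conf -- compute the field at index time
[add_job_meta]
INGEST_EVAL = job_id=replace(source, ".*job_(\d+).*", "\1")

# fields.conf -- make the index-time field searchable as job_id=<value>
[job_id]
INDEXED = true
```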
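If you stay with the app side, it is only a few lines of Python. A sketch assuming the events go through the HTTP Event Collector (URL, token, and field names are placeholders):

```python
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

# Metadata that is constant for this job execution.
JOB_META = {
    "job_id": "run-4711",
    "pipeline": "nightly-sync",
    "config_hash": "a-quite-long-string-that-really-does-not-change",
}

def push_event(changing_fields: dict) -> None:
    """Merge the per-event fields with the static job metadata and send."""
    payload = {
        "sourcetype": "my_app:json",
        "event": {**JOB_META, **changing_fields},
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()

push_event({"ts": "2024-05-01T10:00:01Z", "step": "extract"})
```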
If I use INGEST_EVAL, is Splunk literally going to add those fields to each event, or does it do some internal join? What I want to avoid is that a metadata value like `a-quite-long-string-that-really-does-not-change` actually gets copied into each event.
Yes, INGEST_EVAL adds the result as a field to each event.
Bear in mind that any field that is NOT in an event cannot be used as a filter on that event.
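For example, with the metadata on every event a search like this works (index and field names as in the sketches above):

```
index=main sourcetype=my_app:json job_id="run-4711"
| stats count by step
```

If the field was created by INGEST_EVAL without a fields.conf entry, search it with the indexed-field syntax `job_id::run-4711` instead.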
Thank you. It seems I have no choice but to add the metadata to the events.