I have very long logs that I would like to filter before indexing. There are some patterns that interest me, but I don't think props.conf and transforms.conf will help me, because I want only these events and their strings are very different from one another. I thought of using a bash script that extracts only these events, and I have written one that works well.
Currently I upload the logs manually; in the future I will send them from Amazon S3.
I have 2 questions:
1. How do I define this script in Splunk? How does it know which folder to take the logs from and which script to run?
2. If 1 is not possible, is it perhaps possible with a transforms.conf regex?
For example, these are the only log patterns that I want to be indexed:
ShutdownThread: shutdown reason is: userrequested
XCMP : Tx: PUI broadcast source:0x1 type:0x0 id:0x1 state:0x1
XCMP : Tx: PUI broadcast source:0x1 type:0x0 id:0x1 state:0x0
CAP:MediaRecorder: Start video recording
CAP:MediaRecorder: Stop video recording
(I have about 10 different events with ON/OFF strings)
Define the script as a scripted input (Settings -> Data inputs -> Scripts). Anything the script writes to stdout will be indexed by Splunk. No arguments are passed to the script, so it must contain its own logic to know which log file to read.
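As a rough sketch of what such a script could look like (the log path and the exact patterns are assumptions based on the examples you posted; adjust them for your environment):

```shell
#!/bin/sh
# filter_events: print only the event lines we want Splunk to index.
# The patterns below are taken from the sample events in the question.
filter_events() {
    grep -E 'ShutdownThread: shutdown reason is|XCMP : Tx: PUI broadcast|CAP:MediaRecorder: (Start|Stop) video recording' "$1"
}

# In the real scripted input you would hardcode your log location, e.g.:
# filter_events /var/log/app/device.log   # hypothetical path
```

Place the script under $SPLUNK_HOME/bin (or an app's bin directory) and register it via Settings -> Data inputs -> Scripts; whatever it prints to stdout gets indexed.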
Yes, you can use transforms.conf. Use a REGEX to identify the events you want to keep and route them to the index with DEST_KEY = queue and FORMAT = indexQueue; route everything else to FORMAT = nullQueue.
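A minimal sketch of the two files, assuming your data comes in with a sourcetype called my_device_logs (the stanza names and the regex are assumptions; the order in TRANSFORMS matters, as the last matching transform wins):

```ini
# props.conf
[my_device_logs]
TRANSFORMS-filter = setnull, keep_events

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_events]
REGEX = ShutdownThread: shutdown reason is|XCMP : Tx: PUI broadcast|CAP:MediaRecorder: (Start|Stop) video recording
DEST_KEY = queue
FORMAT = indexQueue
```

The setnull stanza first sends every event to the null queue, then keep_events re-routes only the matching events back to the index queue.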
--- If this reply helps you, an upvote would be appreciated.