My question is more about what methodology would be appropriate for the use case I am trying to address with Splunk.
The business use case in question involves tracking a particular piece of "content" through a series of different systems. The data on hand is logs from each of these systems. From my perspective, the "content" piece is processed through a series of states until it reaches a success state: a success log in the final system. Essentially, the transaction flow is a finite state machine where, if the content is handled normally, it follows a normative sequence of states (each state represented by a log line in the corresponding system).
I am trying to use Splunk to produce real-time reporting on "content" as it flows through all the subsystems and to alert when a piece takes too long at any step. There are defined SLAs for each step that somehow have to be incorporated into my Splunk searches.
Does anyone here have experience with using Splunk to track a fairly long transaction flow (i.e., ~10 different subsystems)? The only way I can think of doing this feasibly right now is to chunk each state transition into its own search query and validate that that transition has been successful. However, the requirement is to create an overall report that shows 'all content pieces as they are progressing through the business flow'.
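To make the "chunking" idea concrete, a single-transition check might look roughly like the following. This is only a sketch: the index names, the a_id/b_parent_id fields, and the coalesce step are placeholders for however the IDs actually chain in my data.

index=system_a OR index=system_b
| eval content_id=coalesce(a_id, b_parent_id)
| stats count(eval(index=="system_a")) AS entered count(eval(index=="system_b")) AS exited BY content_id
| where entered > 0 AND exited == 0

That would flag pieces that entered system A but never showed up in system B, and repeating it for every hop means roughly ten separate searches rather than the one overall report I actually need.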
To give a little more information, each log file has primary IDs for the content piece that can be tied to an ID in the previous system's log. I am assuming that trying to execute some sort of join across 10 different log files is not an appropriate use case for Splunk.
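(For completeness, the brute-force version I am assuming is inappropriate would be nesting subsearch joins, something like the following with made-up index and field names, repeated for each of the remaining systems; I expect that would hit subsearch limits quickly.)

index=system_b | rename b_id AS content_id | join content_id [ search index=system_a | rename a_id AS content_id ]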
Thanks for your help!
I assume you have checked out the transaction command, right?
Yes, I think transaction would not suit this use case, as it deals with events coming from a single log file. I am talking about chaining data together from multiple log files to produce a report.
No, transaction has no such limitation. Give it a try:
index=A OR index=B OR index=C | transaction host, user ...
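For the SLA part, transaction also gives you a duration field (time between the first and last event in the group) and an eventcount field. Assuming you can normalize each system's ID onto one common field, something along these lines should get you a per-content-piece report; content_id, the index names, the coalesce fields, and the 900-second threshold below are all placeholders for whatever your data actually contains:

index=system_a OR index=system_b OR index=system_c
| eval content_id=coalesce(a_id, b_id, c_id)
| transaction content_id maxspan=24h
| eval sla_status=if(duration > 900, "breached", "ok")
| table content_id, duration, eventcount, sla_status

A piece that has only made it through, say, four of the ten systems will show up with a low eventcount, which gives you the "where is everything right now" view. If you need per-step SLAs rather than one end-to-end number, you could skip transaction and instead sort by _time and use streamstats by content_id to compute the gap between consecutive events.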