This is crazy, but you could use the outputlookup
command to use the KV store
as a registry of sorts, like this:
Write your initial lookup:
index=* | eval HardCodedKey=0 | stats first(HardCodedKey) AS HardCodedKey BY host | dedup HardCodedKey | eval NextSearchString="Your Initial or Default Search Here" | stats count by HardCodedKey, NextSearchString | outputlookup MyLookup
Then your scheduled search could do something like this (and also something like above, to refresh the registry for the next run):
index=* | eval HardCodedKey=0 | stats first(HardCodedKey) AS HardCodedKey BY host | dedup HardCodedKey | lookup MyLookup HardCodedKey OUTPUT NextSearchString | map search="Some Search Stuff $NextSearchString$"
Hi Woodcock,
Is there any other way to achieve this use case other than using map?
Is there any equivalent to map without the 10K limit?
Thank you, Woodcock,
for showing an approach that was unknown to me 🙂
But map still has the 10K limit (due to the subsearch limit) :)
I know that if I use the SDKs I can accomplish this inside custom commands, but I don't have the SDK.
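As a side note, if I understand it correctly, that 10K cap is the subsearch result limit, which lives in limits.conf; raising it system-wide is possible but has performance implications. A sketch of the relevant stanza (default value shown):

```
# limits.conf -- subsearch result cap; raise with caution
[subsearch]
maxout = 10000
```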
I think what you may need is a macro:
http://docs.splunk.com/Documentation/Splunk/latest/admin/macrosconf
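For instance, a parameterized macro could carry the dynamic part of the search. A minimal sketch (the macro name, argument name, and index here are made up for illustration):

```
# macros.conf -- hypothetical macro taking the dynamic fragment as an argument
[next_search(1)]
args = searchstring
definition = search index=main $searchstring$
```

A search could then invoke it as `next_search("host=web* status=500")`, with the argument substituted into the definition at parse time.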
Hmm, I tried that; you can't return a macro name with parameters using the return statement.
What I meant was that maybe you could abandon your current approach and start over using a macro-based approach.
Hi,
The use case is like this:
Based on what sort of conditions, and what sort of modifications do you need to make? Query replacement tokens and subsearches might be helpful here, but a complete answer requires more detail about what you have and what you are trying to accomplish. The docs on savedsearch give a hint at using string substitution for replacement tokens.
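As a sketch of that savedsearch hint (the stanza name and token name are illustrative, not from your setup), a saved search can contain a replacement token:

```
# savedsearches.conf -- a saved search containing a replacement token
[batch_search]
search = index=main batch_id=$batch$ | stats count by host
```

which could then be invoked at run time as `| savedsearch batch_search batch=42`, substituting 42 for $batch$ before the search executes.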
For example:
I want to have a single saved search query that gets executed every hour and processes a particular batch of records.
For each batch, the query logic should be modified dynamically.
Can we accomplish this using a custom command? Can we invoke another query inside a custom command without using the Splunk Python SDK?
How do you identify "the particular batch of records" to process? Is it just events from the last hour, or is that variable each hour? Are you processing multiple batches each hour? By "batch query logic should be modified dynamically", what determines the dynamics of this query? Something in how the search is launched? Something in the data of the batch? Something else?
Another potential thought is a combination of gentimes and map, but without understanding what your goal is and what the data looks like, it's still just a shot in the dark.
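For example (a rough sketch only, assuming hourly batches and a placeholder index), gentimes can emit hour boundaries that map then substitutes into each iteration:

```
| gentimes start=-1 increment=1h
| map maxsearches=24 search="search index=main earliest=$starttime$ latest=$endtime$ | stats count"
```

Note that map's subsearch limits still apply, so this doesn't remove the 10K cap mentioned earlier in the thread.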