Hello, Splunk experts,
I have a very large raw data set that needs to pass through different sets of rules. For example, query1 is: index=abc sourcetype=xyz data=raw | rule1, rule2, ... ruleN, and query2 is: index=abc sourcetype=xyz data=raw | ruleN+1, ruleN+2, ... ruleN+M, and so on.
The raw data is the same, but the rules are different. If I run these two queries, how can I share the same raw data in memory so I don't have to load the big data set twice? Is there any solution for this?
You may be taking the wrong approach. What problem are you trying to solve that requires loading the same block of data twice? Once your data is indexed in Splunk, you can search it as many times as you like.
What do you mean by "rule"? That's not a Splunk term.
Sorry, maybe I didn't explain clearly. I have huge raw data and a huge number of rules, where each rule is written in Splunk terms. Because of the large numbers involved, I cannot run one query that covers everything due to Splunk's limits. So I split the raw data into groups (RAW1, RAW2, ...) and also split the rules into groups (RULE1, RULE2, ...). Then I build queries like RAW1+RULE1, RAW1+RULE2, and so on (all of them in SPL). One example rule: |eval exception=if(exception="" AND Rep = "1" AND element = "abc" AND comments IN("abc1","abc2") AND band = "xxx" AND value != "-1", "special", exception). Each RULE group contains a large number of rules like this. I just want to know whether I can keep RAW1 in memory and then pass it through the different RULE groups, to avoid loading RAW1 many times.
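One common way to avoid re-reading the same raw data for every rule group is to run the base search once as a scheduled saved search, then have each rule-group query reuse that job's cached results with the loadjob command. This is only a sketch: the saved-search name "RAW1_base" is a placeholder you would create yourself, and the eval is your example rule. loadjob reads the artifacts of an already-completed job from disk instead of scanning the index again:

```
| loadjob savedsearch="admin:search:RAW1_base"
| eval exception=if(exception="" AND Rep = "1" AND element = "abc" AND comments IN("abc1","abc2") AND band = "xxx" AND value != "-1", "special", exception)
```

Each RULE-group query would start from the same cached job, so the raw data is pulled from the indexers only once per schedule window. If the rule chains are long, putting each RULE group into a search macro would keep the individual queries short. A similar effect is available in dashboards via a base search with post-process searches.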
Is there any way in Splunk to do nested for loops? The requirement above looks like:
for raw in (raw1, raw2, raw3, raw4):
    for rule in (rule1, rule2, rule3, rule4, rule5, rule6):
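SPL has no native nested for loop, but the map command can approximate one: build the cross product of raw groups and rule groups as rows, then run a templated search per row. This is only a sketch under assumptions: the group names are placeholders, and "apply_rules" is a hypothetical saved search you would define that takes the raw group and rule group as arguments. Note the caveat that map launches a fresh search for every row, so it does not share the raw data in memory; combining it with a loadjob-style cached base search would be needed for that:

```
| makeresults
| eval raw=split("raw1,raw2,raw3,raw4", ",")
| mvexpand raw
| eval rule=split("rule1,rule2,rule3,rule4,rule5,rule6", ",")
| mvexpand rule
| map maxsearches=24 search="| savedsearch apply_rules raw_group=$raw$ rule_group=$rule$"
```

The two mvexpand steps turn the 4 raw groups and 6 rule groups into 24 rows, and map substitutes each row's $raw$ and $rule$ values into the templated search (maxsearches must be at least the number of rows, or map stops early).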