The search will be queued for a few seconds, in case a currently running job finishes.
Then, if no search concurrency resource is available, the search job will be skipped and a warning will be published in the messages and in the logs.
The problem is that until Splunk has checked all events, it can't know if it has found all the values of Pack, so it has to continue matching. If you know how many values of Pack exist, you can add a |head <N> after your stats command to make the search stop once it's found N values for Pack.
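As a rough sketch (the sourcetype and the value count are made up for illustration), if you know Pack only ever takes five values, the search could look like:
sourcetype=mydata | stats count by Pack | head 5
which, per the explanation above, lets Splunk finalize the search once five values of Pack have been found instead of scanning every remaining event.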
Hello linu1988.
Thank you for the reply.
Since evaluating thousands of boolean expressions makes the search extremely slow, I have decided to use the lookup by
calling it in every join clause.
Thanks for the advice!
Honor Sir Alacer! ... and now we sing...
We're Knights of the stats Table
We dance whene'er we're able
We do subsearches and chart scenes
With footwork impeccable ....
For filtering a search based on a different search's results, your first approach is usually best.
Let's make up a realistic example: You have events that form a transaction with some transaction_id ... somewhere down the line of that transaction there is a user field, and you want to grab the transactions for user=yuwtennis .
A slow search would go like this:
sourcetype=transactions | transaction transaction_id | search user=yuwtennis
That'll build ALL the transactions and then throw out most of them.
Pre-filtering like this doesn't work if the user field isn't present in every event:
sourcetype=transactions user=yuwtennis | transaction transaction_id
So you'll have to pick out the transaction_id values you need before you build the transaction:
sourcetype=transactions [search sourcetype=transactions user=yuwtennis | dedup transaction_id | fields transaction_id] | transaction transaction_id
That will take a bit more time due to running two searches, but will almost always be miles faster than the first naïve search.
Your workaround #1 looks slow because joining will always be very slow compared to filtering before loading events.
Your workaround #2 is probably going to be worse since, as you say, there may be thousands of values returned from the subsearch, so map would have to run thousands of searches - that can't be fast.
Great! Using stats and sort is what I was thinking. Another command you should check out is "head". It will return the most recent n events (i.e. | head n). But glad you got your search to work the way you wanted. You should mark the question as "answered" if you have no more issues.
Not quite sure about a thousand... map will run a search for each event it's given, so it would run a thousand searches. By default it stops after ten, so running a thousand may or may not break things. Just give it a shot...
While you're studying map , take a look at localize - maybe that could simplify your subsearch for building the timeranges.
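As a hedged sketch (the index, field names, and limit are invented for illustration), raising map's default cap of ten searches looks like:
... | fields starthour, endhour | map maxsearches=1000 search="search index=web earliest=$starthour$ latest=$endhour$"
map substitutes each input row's field values into the $...$ tokens before running the inner search, so it runs one search per row it receives.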
You could hack your way into conditional execution with map :
some search that returns either one or zero results | map search="the other search"
map will run "the other search" for each event returned by "some search...", so if "some search..." returns one event it will run "the other search" once and if it returns zero events it will not run "the other search" at all.
Use addinfo . See the docs on both collect and addinfo .
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Collect
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Addinfo
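A minimal sketch of the idea (the index names are assumptions): addinfo tags each result with fields such as info_min_time and info_max_time describing the search's time range, which collect then stores alongside the data in the summary index:
index=web | stats count by host | addinfo | collect index=my_summary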
Report acceleration might make more sense for you as it can handle late arriving events. See this doc:
http://docs.splunk.com/Documentation/Splunk/5.0.5/Knowledge/Aboutsummaryindexing
I seem to get more than six decimals... when you run this
| stats count as foo | eval foo = 1000000.0001 | eval bar = foo / 10000000
do you get a value for bar other than 0.10000000001 ?
Hello appleman.
Thanks for taking your time. I appreciate your time.
My plan is to specify "All time" in the search and
set a flexible time range from the time picker.
The goal is to fetch all the events from the index for a specific subsearch.
I will take a look as well.
That controls whether the sourcetype is visible in the pulldown when creating an input using "From List" or when using the data preview page.
It'd make sense to disobey the docs and set that to 1 if you create a sourcetype in the config files, but want to allow users to select that sourcetype through the regular UI.
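Assuming the setting being discussed here is pulldown_type in props.conf (an assumption on my part), the stanza would look something like:
[my_custom_sourcetype]
pulldown_type = 1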
Hi,
yes, this is possible. Just download the app splunk-6x-dashboard-examples_201; it contains an example called "Time Range Picker Within Panels" that shows an individual time range picker on a dashboard. I think it may help you.
Thanks!
My point was more that join is an expensive operation, computation-wise. Perhaps you can reach the same results with transaction or stats . But it all depends on what your data looks like, and what you want out of it.
I believe that if you run the search interactively, all searches and subsearches will use the time limits you set in the drop-down 'time picker' menu, unless you specify otherwise.
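As a hedged sketch (index and field names are invented), a join such as
index=orders | join order_id [search index=shipments]
can often be rewritten with stats over both datasets at once:
(index=orders OR index=shipments) | stats values(status) as status by order_id
which avoids pulling the second dataset through an expensive join.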
/k
Easy enough - just use subsearches in your search string. There is no real reason to create a temp index; you are just adding another failure point.
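A minimal sketch of that idea (the index, sourcetype, and field names are assumptions):
index=main sourcetype=access_combined [search index=main sourcetype=app_errors | dedup host | fields host]
The subsearch returns a list of host values that filters the outer search directly, so no temporary index is needed.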