Splunk Search

Monthly distribution of query results

jbrenner
Path Finder

Hello,

I have the following query which gives me the percentage of successful orders for the time period selected in the drop-down:

index=abc "search string 1" | STATS COUNT AS ATTEMPTED_ORDERS | appendcols [search index=abc "search string 2" | STATS COUNT AS SUCCESSFUL_ORDERS] | eval PERCENT_SUCCESSFUL = (SUCCESSFUL_ORDERS/ATTEMPTED_ORDERS) * 100 | TABLE PERCENT_SUCCESSFUL

I would like to use the timechart command to obtain this value (PERCENT_SUCCESSFUL) for each day over the last month, so I can display it as a chart.

Thanks!
Jonathan

1 Solution

DalJeanis
Legend

Try this -

| multisearch 
    [search index=abc "search string 1"
     | eval  Attempts = 1
     | fields Attempts 
    ] 
    [search index=abc "search string 2"
     | eval Successes = 1
     | fields Successes 
    ]
| bin _time span=1d
| stats sum(*) as * by _time
| eval Attempts=if(Successes>Attempts, Successes, Attempts)
| eval Percent_Successful = round(100*Successes/Attempts,0)
| timechart span=1d max(Percent_Successful) 


woodcock
Esteemed Legend

A more efficient answer (all in 1 search) is this:

index=abc "search string 1" OR "search string 2"
| bin _time span=1d
| stats count(eval(searchmatch("search string 1"))) AS ATTEMPTED_ORDERS
        count(eval(searchmatch("search string 2"))) AS SUCCESSFUL_ORDERS BY _time
| eval PERCENT_SUCCESSFUL = (coalesce(SUCCESSFUL_ORDERS, 0)/coalesce(ATTEMPTED_ORDERS, SUCCESSFUL_ORDERS)) * 100
| table _time PERCENT_SUCCESSFUL

jbrenner
Path Finder

Hi Woodcock,

Thanks for the answer.
When I run your query, I get the following error:

Error in 'bin' command: Invalid argument: 'span'

Thanks!
Jonathan


woodcock
Esteemed Legend

Formatting error; get rid of the spaces around the equals sign in span=1d. I've updated my answer to fix this.
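In other words, the original answer presumably had a line along these lines (with spaces):

| bin _time span = 1d

which is what triggers "Error in 'bin' command: Invalid argument: 'span'"; it has to be written without spaces:

| bin _time span=1d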


woodcock
Esteemed Legend

Sorry @DalJeanis; I didn't check to see if OP was right about his attribution to me (he wasn't) until after I edited your answer. I also pulled the bin out of each search.


DalJeanis
Legend

@woodcock - How DARE you fix my sloppy code?

The NERVE of some people.

Heh.


jbrenner
Path Finder

Sorry for the confusion. Thanks, @DalJeanis. Your solution works!


woodcock
Esteemed Legend

Try this:

index=abc "search string 1"
| timechart span=1d count AS ATTEMPTED_ORDERS
| appendcols [search
   index=abc "search string 2" 
   | timechart span=1d count AS SUCCESSFUL_ORDERS]
| stats values(*) AS * BY _time
| eval PERCENT_SUCCESSFUL = (coalesce(SUCCESSFUL_ORDERS, 0)/coalesce(ATTEMPTED_ORDERS, SUCCESSFUL_ORDERS)) * 100
| table _time PERCENT_SUCCESSFUL

DalJeanis
Legend

Hey, under what circumstances is timechart guaranteed to come up with the same start date for those two (sub)searches? I'm thinking that if there are no "search string 2" events on the first day that there is a "search string 1" event, the records will be matched up wrong.
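A quick toy run (made-up data, not the OP's indexes) shows the row-by-row pairing that appendcols does: the subsearch's rows attach to the main search's rows by position, not by _time.

| makeresults count=3
| streamstats count AS row
| eval _time = relative_time(now(), "-".tostring(row)."d@d")
| eval ATTEMPTED_ORDERS = 10
| appendcols
    [| makeresults count=2
     | eval SUCCESSFUL_ORDERS = 5
     | fields SUCCESSFUL_ORDERS]
| table _time ATTEMPTED_ORDERS SUCCESSFUL_ORDERS

Only the first two rows pick up SUCCESSFUL_ORDERS; the third is left empty regardless of which days the events actually fell on.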


woodcock
Esteemed Legend

Good point, I have updated my answer.


sbbadri
Motivator

try below

index=abc earliest=-30d@d latest=now "search string 1" | STATS COUNT AS ATTEMPTED_ORDERS | appendcols [search index=abc earliest=-30d@d latest=now "search string 2" | STATS COUNT AS SUCCESSFUL_ORDERS] | eval PERCENT_SUCCESSFUL = (SUCCESSFUL_ORDERS/ATTEMPTED_ORDERS) * 100 | timechart span=1d count by PERCENT_SUCCESSFUL


DalJeanis
Legend

@sbbadri - Not a bad try, but that's not going to work because the initial stats command has a single result, not one result per day, so there is no field _time remaining to go into the timechart command.

Go ahead and try again; you're on the right track. (No peeking at my version or @woodcock's version; they are not perfect, so there's plenty of room for you to do better than us.)

Also, when you change your strategy to fix the _time issue, I would suggest avoiding appendcols, since it will match up the records based on their order rather than their dates. You'll need to bin _time to days and then match the records up by _time, either with a stats command or a join (see the sketch below).

I said "don't peek at woodcock's."
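For illustration, here is a rough, untested sketch of the bin-then-match-by-_time approach using the same index=abc and placeholder search strings from the question, with a left join so days that have zero successes still show up:

index=abc earliest=-30d@d latest=now "search string 1"
| bin _time span=1d
| stats count AS ATTEMPTED_ORDERS BY _time
| join type=left _time
    [search index=abc earliest=-30d@d latest=now "search string 2"
     | bin _time span=1d
     | stats count AS SUCCESSFUL_ORDERS BY _time]
| eval PERCENT_SUCCESSFUL = round(100 * coalesce(SUCCESSFUL_ORDERS, 0) / ATTEMPTED_ORDERS, 0)
| table _time PERCENT_SUCCESSFUL

A single-search stats merge (along the lines of the searchmatch answer earlier in the thread) avoids the join entirely and is usually cheaper.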
