Splunk Search

Results of two searches displayed on one chart

rlautman
Path Finder

I'm trying to take the results of two searches, each against a different index, and display them in one table so I can compare requests for orders with actual orders placed. My search looks something like this:

index=A product=inA | stats count(UniqueID) as Requests | append [search index=B order="BuyProduct" | stats count(UniqueID) as OrdersPlaced]

I want to list the results by customer, with the highest number of requests first. I've tried using top and another stats command, but I get Requests on one row and OrdersPlaced on the row below, and when I try to use top no results are found.

Any help on this would be appreciated

1 Solution

bmacias84
Champion

Depending on what you're going for, you could use appendcols, selfjoin, or join, or perform an eval statement combining the two searches.

using appendcols:


index=A product=inA | stats count(UniqueID) as Requests | appendcols [search index=B order="BuyProduct" | stats count(UniqueID) as OrdersPlaced]

using join:


index=A product=inA | stats count(UniqueID) as Requests by _time | join _time [search index=B order="BuyProduct" | stats count(UniqueID) as OrdersPlaced by _time]
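Note that joining on raw _time only matches events whose timestamps line up exactly, so in practice you would usually bucket _time first. If it's the per-customer comparison you're after, the join could instead be done on a shared customer field - customer here is just a placeholder for whatever field identifies the customer in both indexes - something like:

index=A product=inA | stats count(UniqueID) as Requests by customer | join type=outer customer [search index=B order="BuyProduct" | stats count(UniqueID) as OrdersPlaced by customer] | sort - Requests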

using eval:


(index=A product=inA) OR (index=B order="BuyProduct") | eval Requests=if(product=="inA",1,null()) | eval OrdersPlaced=if(order=="BuyProduct",1,null()) | stats count(OrdersPlaced) as OrdersPlaced, count(Requests) as Requests
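To list the results per customer with the highest number of requests first, the same idea can be grouped and sorted - again assuming both indexes share a customer field (a placeholder name, substitute whatever field identifies the customer in your data):

(index=A product=inA) OR (index=B order="BuyProduct") | eval Requests=if(product=="inA",1,null()) | eval OrdersPlaced=if(order=="BuyProduct",1,null()) | stats count(Requests) as Requests, count(OrdersPlaced) as OrdersPlaced by customer | sort - Requests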

Hope this helps or gets you started. Play around with these ideas. If this does help, don't forget to vote up or accept the answer.

Cheers,


asurace
Engager

I did it a bit differently:

index=syslog app=myapp env=pr | search pushed | eval combo="PutMetric" + "_" + role + "_" + dc | timechart count by combo | appendcols [search app=myapp env=pr Throttling | eval THROTTLE="THROTTLING" + "_" + role + "_" + dc | timechart count by THROTTLE]

In this way I get the THROTTLE series overlaid on the combo series.

I used it to find out how many boto requests were throttled by AWS.



mchrisman
New Member

Neither "join" nor "appendcols" work correctly if there are times* that contain an event of the first type but not of the other type. If using "join", those times will be completely skipped. If using "appendcols", because the columns get filled in with no gaps, some of the data can get shifted into the wrong time zone.

Using eval seems to work correctly.

*Or time periods, if you're using "|bin _time"
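As a rough sketch of the single-search eval approach over binned time (the 1h span here is just an illustrative choice), every bucket is computed from the same event set, so nothing can get shifted:

(index=A product=inA) OR (index=B order="BuyProduct") | bin _time span=1h | eval Requests=if(product=="inA",1,null()) | eval OrdersPlaced=if(order=="BuyProduct",1,null()) | stats count(Requests) as Requests, count(OrdersPlaced) as OrdersPlaced by _time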


rlautman
Path Finder

The appendcols search has worked - with plain append, each search was taking a new row in my table - thanks 🙂
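For completeness, plain append could probably also be made to work by collapsing the two rows into one afterwards with a second stats, roughly:

index=A product=inA | stats count(UniqueID) as Requests | append [search index=B order="BuyProduct" | stats count(UniqueID) as OrdersPlaced] | stats values(Requests) as Requests, values(OrdersPlaced) as OrdersPlaced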

martin_mueller
SplunkTrust

Can you give an example of what the result you're looking for should look like?
