Activity Feed
- Karma Re: Is there any optimal way to get context both before and after for search result for fredclown. 09-06-2024 11:29 AM
- Got Karma for Re: how to create pivot like table. 06-05-2020 12:50 AM
- Got Karma for Re: how to create pivot like table. 06-05-2020 12:50 AM
- Got Karma for Re: calculate percentage of multiple columns and then create a chart. 06-05-2020 12:49 AM
- Posted Re: how to create pivot like table on Reporting. 12-13-2019 04:02 PM
- Posted Re: how to create pivot like table on Reporting. 12-13-2019 03:20 PM
- Posted Re: how to create pivot like table on Reporting. 12-13-2019 02:41 PM
- Posted how to create pivot like table on Reporting. 12-12-2019 04:59 PM
- Posted Re: multiple joins and subsearch question on Splunk Search. 03-14-2019 07:10 PM
- Posted multiple joins and subsearch question on Splunk Search. 03-12-2019 06:50 PM
- Tagged multiple joins and subsearch question on Splunk Search. 03-12-2019 06:50 PM
- Posted count events where the same value exists in two different fields on Splunk Search. 03-01-2019 07:49 PM
- Tagged count events where the same value exists in two different fields on Splunk Search. 03-01-2019 07:49 PM
- Posted Re: eval output is incorrect when comparing two fields with numeric values on Getting Data In. 02-22-2019 04:49 PM
- Posted eval output is incorrect when comparing two fields with numeric values on Getting Data In. 02-15-2019 01:30 AM
- Tagged eval output is incorrect when comparing two fields with numeric values on Getting Data In. 02-15-2019 01:30 AM
- Posted Re: Can you help me compare two fields with numeric values? on Splunk Search. 02-14-2019 12:15 AM
- Posted Can you help me compare two fields with numeric values? on Splunk Search. 02-13-2019 09:16 PM
- Tagged Can you help me compare two fields with numeric values? on Splunk Search. 02-13-2019 09:16 PM
- Tagged Can you help me compare two fields with numeric values? on Splunk Search. 02-13-2019 09:16 PM
12-13-2019
04:02 PM
1 Karma
hi @woodcock ! I am embarrassed to ask but what is UpVote and how do I do it? 🙂
12-13-2019
03:20 PM
1 Karma
hi @to4kawa !! Thanks for taking time to look at my question! Your solution also works. Like you said, there are several possible answers. woodcock also had a suggestion that is somewhat similar to yours, but he used xyseries and kv.
Learned a few things in this post which I am really thankful for! 🙂
12-13-2019
02:41 PM
Hi @woodcock ! Thank you for taking time to look over my question. Really appreciate it! I like posting questions here on Splunk Answers as I always learn neat tips and tricks that I would not have thought of!
I tried your query, and the output after the kv command produced an ExtractStart that was blank, while ExtractEnd only had the date - the time portion was dropped.
After looking around on the internet, I tweaked the solution you gave me to use the 'extract' command instead of kv. I also added a | as the delimiter in the concatenation so that I can split the 2 dates more easily:
| eval raw = "ExtractStart=" . ExtractStart . " | "."ExtractEnd=" . ExtractEnd
| fields - Extract*
| xyseries raw CM CallCount
| fillnull value=0
| rename raw AS _raw
| extract pairdelim="|" kvdelim="="
| fields - _raw
| table ExtractStart ExtractEnd *
Still curious to see how to make the kv thing work if you don't mind sharing. 🙂
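For anyone following along, the extract step here is just key=value splitting over the synthetic _raw string. A rough sketch of what that pair/key-value split does, in Python (illustrative only, not Splunk internals; the sample string is made up):

```python
def parse_kv(raw, pairdelim="|", kvdelim="="):
    """Split a raw string into key/value pairs, mimicking the
    pairdelim/kvdelim behavior of the extract command above."""
    fields = {}
    for pair in raw.split(pairdelim):
        if kvdelim in pair:
            key, _, value = pair.partition(kvdelim)
            fields[key.strip()] = value.strip()
    return fields

raw = "ExtractStart=2019-12-12 04:00 | ExtractEnd=2019-12-12 04:15"
print(parse_kv(raw))
```

This is also why adding the | to the concatenation helps: it gives the pair splitter an unambiguous boundary between the two dates.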
12-12-2019
04:59 PM
I have an SPL query that produces a table output like this:
ExtractStart ExtractEnd CM CallCount
12/12/2019 4:00 12/12/2019 4:15 CM2 55
12/12/2019 4:00 12/12/2019 4:15 CE1 0
12/12/2019 4:00 12/12/2019 4:15 CE3 28
12/12/2019 4:00 12/12/2019 4:15 CM4 32
12/12/2019 4:15 12/12/2019 4:30 CM2 192
12/12/2019 4:15 12/12/2019 4:30 CE1 0
12/12/2019 4:15 12/12/2019 4:30 CE3 95
12/12/2019 4:15 12/12/2019 4:30 CM1 157
12/12/2019 4:30 12/12/2019 4:45 CM2 197
12/12/2019 4:30 12/12/2019 4:45 CE1 0
12/12/2019 4:30 12/12/2019 4:45 CE3 92
12/12/2019 4:30 12/12/2019 4:45 CM4 106
12/12/2019 4:45 12/12/2019 5:00 CM2 208
12/12/2019 4:45 12/12/2019 5:00 CE1 0
12/12/2019 4:45 12/12/2019 5:00 CE3 112
The query that produced the table above is this:
index=extract sourcetype=jobsummaries
| rex field=jobName "(?:\w+\s)?(?<CM>\w+)"
| rex field=rangeStart "(?<TS1>\d+-\d+-\d+\s\d+:\d+:\d+)\..+"
| rex field=rangeEnd "(?<TS2>\d+-\d+-\d+\s\d+:\d+:\d+)\..+"
| eval ExtractStart = strptime(TS1, "%Y-%m-%d %H:%M")
| eval ExtractStart = strftime(ExtractStart, "%Y-%m-%d %H:%M")
| eval ExtractEnd = strptime(TS2, "%Y-%m-%d %H:%M")
| eval ExtractEnd = strftime(ExtractEnd, "%Y-%m-%d %H:%M")
| eval today = relative_time(now(),"-0d@d")
| eval today = strftime(today, "%Y-%m-%d %H:%M")
| where ExtractStart >= today
| rename processedNewCalls as CallCount
| table execId ExtractStart ExtractEnd CM CallCount
| dedup execId
| fields - execId
| sort ExtractStart
How do I modify my query so that my table looks like this?
ExtractStart ExtractEnd CM1 CM2 CM3 CE1 CE3
12/12/2019 4:00 12/12/2019 4:15 0 55 32 0 28
12/12/2019 4:15 12/12/2019 4:30 157 192 0 0 95
12/12/2019 4:30 12/12/2019 4:45 0 197 106 0 92
12/12/2019 4:45 12/12/2019 5:00 0 208 0 0 112
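What is being asked for here is a pivot: group by the (ExtractStart, ExtractEnd) pair and spread the CM values into columns, filling gaps with 0. As a hedged sketch of that reshaping logic in plain Python, using a few sample rows from the table above:

```python
from collections import defaultdict

# Each row mirrors the table above: (ExtractStart, ExtractEnd, CM, CallCount).
rows = [
    ("12/12/2019 4:00", "12/12/2019 4:15", "CM2", 55),
    ("12/12/2019 4:00", "12/12/2019 4:15", "CE1", 0),
    ("12/12/2019 4:00", "12/12/2019 4:15", "CE3", 28),
    ("12/12/2019 4:15", "12/12/2019 4:30", "CM2", 192),
]

def pivot(rows):
    """Group by the time pair and spread CM values into columns,
    defaulting missing combinations to 0 (like fillnull value=0)."""
    cms = sorted({cm for _, _, cm, _ in rows})
    grouped = defaultdict(dict)
    for start, end, cm, count in rows:
        grouped[(start, end)][cm] = count
    return [
        {"ExtractStart": s, "ExtractEnd": e, **{cm: vals.get(cm, 0) for cm in cms}}
        for (s, e), vals in grouped.items()
    ]

print(pivot(rows))
```

In SPL this kind of reshaping is usually done with chart or xyseries, which is the direction the answers in this thread take.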
03-14-2019
07:10 PM
Hi MuS,
Thanks for the tip. I have seen that link about subsearches and I have actually referred to it several times before =) But I am still somewhat stuck on how to convert my query away from subsearch/join (which is actually useless now because I am hitting some Splunk limit, so I am not even getting full data for one day).
Below is my query so far. I am able to see the 4 columns that I need from eh_event=SIP_RESPONSE and all the RTCP_MESSAGE columns (minus the ones I explicitly dropped). But my data is now just one row with multiple columns instead of being broken down into multiple rows. Do I need to do an mvexpand on each field? Problem is I have at least 40 columns from RTCP_MESSAGE alone.
index=ehop_voip sourcetype=VOIP (eh_event=RTCP_MESSAGE fractionLost>128 callId=*) OR (eh_event=SIP_RESPONSE method=BYE callId=*)
| eval {eh_event}-Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| foreach * [eval "{eh_event}_<<MATCHSTR>>"=<<MATCHSTR>>]
| rex field=senderAddr "xx\.(?<range>\d{1,3})\.\d{1,3}\.\d{1,3}"
| eval isCC=if(cidrmatch("xx.xx.0.0/16" ,senderAddr),1,0)
| eval isCC2=if(cidrmatch("xx.xx.0.0/16" ,senderAddr),1,0)
| fields RTCP* isCC isCC2 SIP_RESPONSE-Time SIP_RESPONSE_clientAddr SIP_RESPONSE_serverAddr SIP_RESPONSE_callId
| fields - RTCP_MESSAGE_date* RTCP_MESSAGE_clientName RTCP_MESSAGE_punct RTCP_MESSAGE_source* RTCP_MESSAGE_eh_* RTCP_MESSAGE_eventtype* RTCP_MESSAGE_unix* RTCP_MESSAGE_cName RTCP_MESSAGE_host RTCP_MESSAGE_index _raw _time RTCP_MESSAGE_splunk*
| stats values(*) as *
I modified my "stats values(*) as *" to "stats values(*) as * by RTCP_MESSAGE_callId SIP_RESPONSE_callId", but that didn't work. So I tried renaming both RTCP_MESSAGE_callId and SIP_RESPONSE_callId to just "callId" and then doing "stats values(*) as * by callId", but I got nothing either.
What am I doing wrong here? Please help!
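One pattern that often helps with this kind of correlation (an assumption about the intent here, not a confirmed fix) is to coalesce the two callId variants into a single key before grouping; in SPL that would be an eval with coalesce(...) ahead of the stats, since renaming both fields to the same name tends to clobber one of them. The grouping logic, sketched in Python with hypothetical simplified events:

```python
from collections import defaultdict

# Hypothetical simplified events: each carries one of the two callId variants.
events = [
    {"RTCP_MESSAGE_callId": "abc", "fractionLost": 200},
    {"SIP_RESPONSE_callId": "abc", "method": "BYE"},
    {"RTCP_MESSAGE_callId": "xyz", "fractionLost": 150},
]

def group_by_call(events):
    """Coalesce the two callId variants into one key, then merge the
    fields of all events sharing that key (like stats values(*) by callId)."""
    merged = defaultdict(dict)
    for ev in events:
        call_id = ev.get("RTCP_MESSAGE_callId") or ev.get("SIP_RESPONSE_callId")
        for field, value in ev.items():
            merged[call_id].setdefault(field, value)
    return dict(merged)

print(group_by_call(events))
```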
03-12-2019
06:50 PM
I have got 3 queries that I need to join together.
The first query has a subsearch. I used a subsearch because I need to find the records that have fractionLost > 128 for eh_event=RTCP_MESSAGE. From that subsearch I collect the callId. I then use the callId to display the SIP records with the same callId AND a method of BYE. This query works, but it is not the fastest; I don't know of any other way to do this than a subsearch.
index=ehop sourcetype=VOIP eh_event=SIP_RESPONSE method=BYE callId=* [ search index=ehop sourcetype=VOIP eh_event=RTCP_MESSAGE fractionLost > 128
| eval {eh_event}-Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rex field=senderAddr "xx\.(?<range>\d{1,3})\.\d{1,3}\.\d{1,3}"
| where range >=xx and range <=xxx
| table callId]
| eval SIPTime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table SIPTime clientAddr serverAddr callId
Now the query above only gives me the columns from the main search; I want to join the columns of the subsearch as well. So I did a join, which basically looks like the subsearch above but with all the columns. The query seems to work, albeit slowly:
index=ehop sourcetype=VOIP eh_event=SIP_RESPONSE method=BYE callId=* latest=-3h@h [ search index=ehop sourcetype=VOIP eh_event=RTCP_MESSAGE fractionLost > 128
| eval {eh_event}-Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rex field=senderAddr "xx\.(?<range>\d{1,3})\.\d{1,3}\.\d{1,3}"
| where range >=xxx and range <=xxx
| table callId]
| eval SIPTime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table SIPTime clientAddr serverAddr callId
| join type=left callId [ search index=ehop sourcetype=VOIP eh_event=RTCP_MESSAGE fractionLost > 128
| eval {eh_event}-Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rex field=senderAddr "10\.(?<range>\d{1,3})\.\d{1,3}\.\d{1,3}"
| where range >=xxx and range <=xxx
| table *
| fields - burstDensity burstDuration change_type clientName date* _raw host index punct source sourcetype splunk* eh_* time* tag* eventtype unix_* -callId]
So now the query above gives me all the columns for both eh_event=SIP_REQUEST and eh_event=RTCP_MESSAGE. BUT I have one more requirement, which is to join eh_event=RTP_Tick. Again, I can use the callId to join RTP_Tick to the rest. This is where I run into issues. I can join RTP_Tick with RTCP_MESSAGE just fine (query below) by employing the main-search-with-subsearch technique.
BUT how do I join the query below to the query above so that I display all SIP_REQUEST, RTP_Tick and RTCP_MESSAGE events that have the same callId???
index=ehop sourcetype=VOIP eh_event=RTP_Tick callId=* [ search index=ehop sourcetype=VOIP eh_event=RTCP_MESSAGE fractionLost > 128 callId=*
| eval {eh_event}-Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rex field=senderAddr "xx\.(?<range>\d{1,3})\.\d{1,3}\.\d{1,3}"
| where range >=xxx and range <=xxxx
| table callId]
| eval {eh_event}-Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table *
| fields - clientName date* _raw host index punct source sourcetype splunk* eh_* time* tag* eventtype unix_* -callId _time version
I tried to do a second join but that didn't work. I only get SIP_REQUEST and RTCP_MESSAGE events; nothing shows up for RTP_Tick. Any suggestions on how I can optimize my queries are welcome as well. I know that join/append/subsearches should be a last resort, but I couldn't see any other way.
Please help and thanks in advance!!
- Tags:
- splunk-enterprise
03-01-2019
07:49 PM
I need to write a query that counts events when 3 criteria are met. The first two are easy: the events have to have the same "Timestamp" and "clientAddr". The 3rd part is where I am having issues: the "to" and "from" fields need to have the same value.
Here is my query so far:
index=voip method=BYE eh_event=SIP_REQUEST method=BYE NOT ( to="<sip:844*" OR to="<sip:855*" OR to="<sip:866*" OR to="<sip:877*" OR to="<sip:888*" OR to="<sip:800*" OR to="<sip:+1844*" OR to="<sip:+1855*" OR to="<sip:+1866*" OR to="<sip:+1877*" OR to="<sip:+1888*" OR to="<sip:+1800*" ) OR (index=extrahop_voip method=BYE eh_event=SIP_REQUEST NOT ( from="<sip:+1844*" OR from="<sip:+1855*" OR from="<sip:+1866*" OR from="<sip:+1877*" OR from="<sip:+1888*" OR from="<sip:+1800*" OR from="<sip:844*" OR from="<sip:855*" OR from="<sip:866*" OR from="<sip:877*" OR from="<sip:888*" OR from="<sip:800*" ) )
| rex field=clientAddr "99\.888\.7\.(?<range>\d{1,3})"
| where range >=60 and range <=80
| rex field=to "<sip:(?:\+\d)?(?<to>\d+)@.+>"
| rex field=from "<sip:(?:\+\d)?(?<from>\d+)@.+>"
| fillnull value=NULL
| search NOT (from=NULL)
| dedup callId
| table Timestamp clientAddr from to
And the sample output would look something like this:
Timestamp clientAddr from to
1551493234 99.888.7.66 5556661234 8004446789
1551493509 99.888.7.66 888234567 5556661234
1551493509 99.888.7.66 5556661234 8004446789
1551486810 99.888.7.80 5556661234 8004446789
As you can see, rows 2 & 3 meet the 3 criteria: same Timestamp, same clientAddr, and the value in the from field of one row (5556661234) exists in the to field of the other row (5556661234). It doesn't matter that from value "888234567" doesn't match to value "8004446789", as long as there is at least one pair that matches.
Please help! I am thinking I might need to use a multivalued field like stats values() for this, but I am not sure how to implement it exactly.
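The multivalue instinct is roughly the right shape: group rows by Timestamp and clientAddr, collect the from and to values per group, and flag groups where the two sets overlap. A hedged Python sketch of that matching logic, using the sample rows above:

```python
from collections import defaultdict

# Sample rows from the table above: (Timestamp, clientAddr, from, to).
rows = [
    (1551493234, "99.888.7.66", "5556661234", "8004446789"),
    (1551493509, "99.888.7.66", "888234567",  "5556661234"),
    (1551493509, "99.888.7.66", "5556661234", "8004446789"),
    (1551486810, "99.888.7.80", "5556661234", "8004446789"),
]

def matching_groups(rows):
    """Return (Timestamp, clientAddr) groups where some row's 'from'
    value also appears as a 'to' value within the same group."""
    groups = defaultdict(lambda: (set(), set()))
    for ts, addr, frm, to in rows:
        froms, tos = groups[(ts, addr)]
        froms.add(frm)
        tos.add(to)
    return [key for key, (froms, tos) in groups.items() if froms & tos]

print(matching_groups(rows))
```

In SPL terms, the equivalent would be a stats that gathers values(from) and values(to) by Timestamp and clientAddr, then a test for overlap between the two multivalue fields.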
- Tags:
- splunk-enterprise
02-22-2019
04:49 PM
Hello @renjith.nair ! Apologies for the late response. I was actually working on my query on and off and had to test a few things, but ultimately your answer, that values(field) is a multivalued field and that mvindex should be used for the comparison, was the answer to my problem! 😃 I saved the mvindex(x,x) value to another variable first and then used that variable in the eval isTrue if-statement comparison.
Here is my final query:
index=cisco sourcetype=rcd earliest=-21m@m latest=-5m@m
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| stats count(eval(like(Variable10,"Tx%|NS|%"))) as NS_Count by _time
| streamstats count as pri_key
| streamstats avg(NS_Count) as avg, stdev(NS_Count) as stdev
| eval avg=round(avg,2)
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*2)
| eval upperBound=(avg+stdev*2)
| eval time_5m_value=if(pri_key=4,'NS_Count',"")
| eval time_15m_prev_upperBound=if(pri_key=3,'upperBound',"")
| eval time_15m_prev_lowerBound=if(pri_key=3,'lowerBound',"")
| eval time_15m_prev_avg=if(pri_key=3,'avg',"")
| eval time_15m_prev_stdev=if(pri_key=3,'stdev',"")
| stats values(time_5m_value) AS FiveMinVal values(time_15m_prev_upperBound) AS PrevFifteenMinUpprBound values(time_15m_prev_lowerBound) AS time_15m_prev_lowerBound values(time_15m_prev_avg) AS time_15m_prev_avg values(time_15m_prev_stdev) AS time_15m_prev_stdev
| eval FiveMinVal=mvappend(FiveMinVal,"")
| eval PrevFifteenMinUpprBound=mvappend(PrevFifteenMinUpprBound,"")
| eval pos1=mvindex(FiveMinVal,0)
| eval pos2=mvindex(FiveMinVal,1)
| eval pos3=mvindex(FiveMinVal,2)
| eval pos4=mvindex(PrevFifteenMinUpprBound,0)
| eval pos5=mvindex(PrevFifteenMinUpprBound,1)
| eval pos6=mvindex(PrevFifteenMinUpprBound,2)
| eval isTrue=if(pos2 > pos5,1,0)
02-15-2019
01:30 AM
I have a query with an eval statement that assigns 1 to field 'isTrue' if field 'value1' is greater than field 'value2', and 0 otherwise.
My problem is that if field value1 has, say, a value of 300 and I am comparing it to field value2 which has a value of 0.00, the 'isTrue' field says '0' instead of '1'.
However, what confuses the heck out of me is that when value2 is non-zero, the isTrue field is assigned the correct value!
And as if I wasn't confused enough, if I use makeresults to fake out the values, the isTrue field gets assigned the right value when comparing field 'value1' (which has a value greater than zero) against field 'value2' which has a value of 0.00.
Can someone out there please help? What am I missing here? I tried adding quotes and double quotes around the field names, but to no avail.
Here is my full query. The eval statement is at the bottom.
index=uc sourcetype=rcd
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| stats count(eval(like(Variable10,"Tx%|U|%"))) as U_Count by _time
| streamstats count as pri_key
| streamstats avg(U_Count) as avg, stdev(U_Count) as stdev
| eval avg=round(avg,2)
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*2)
| eval upperBound=(avg+stdev*2)
| eval time_5m_value=if(pri_key=4,'U_Count',"")
| eval time_15m_prev_upperBound=if(pri_key=3,'upperBound',"")
| eval time_15m_prev_lowerBound=if(pri_key=3,'lowerBound',"")
| eval time_15m_prev_avg=if(pri_key=3,'avg',"")
| eval time_15m_prev_stdev=if(pri_key=3,'stdev',"")
| stats values(time_5m_value) AS value1 values(time_15m_prev_upperBound) AS value2 values(time_15m_prev_lowerBound) AS time_15m_prev_lowerBound values(time_15m_prev_avg) AS time_15m_prev_avg values(time_15m_prev_stdev) AS time_15m_prev_stdev
| eval isTrue=if(value1 > value2, 1, 0)
And here is the makeresults statement that I was testing with that is working just fine when comparing value1 that is greater than 0 against value2 field that is 0.00:
| makeresults count=1 | eval value1=300, value2=0.00, time_15m_prev_lowerBound=0.00, time_15m_prev_avg=0.00, time_15m_prev_stdev=0.00| fields - _time
| eval isTrue=if(value1 > value2,1,0)
Thank you in advance!!
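For readers hitting the same wall: the symptoms are consistent with value1 coming out of stats values() as a multivalue field (and possibly as a string), so the > is not the numeric scalar comparison it looks like; the makeresults test passes because there the fields are plain numbers. The shape of the eventual fix (index an element out with mvindex, force it numeric, then compare) can be sketched in Python; the function name and inputs are illustrative:

```python
def is_true(value1, value2):
    """Pick the first element if the field is multivalued (like mvindex(x, 0)),
    coerce to float (like tonumber), then compare numerically."""
    if isinstance(value1, list):
        value1 = value1[0]
    if isinstance(value2, list):
        value2 = value2[0]
    return 1 if float(value1) > float(value2) else 0

print(is_true(["300"], ["0.00"]))  # multivalue string fields, numeric compare -> 1
```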
- Tags:
- splunk-enterprise
02-14-2019
12:15 AM
Thank you so so much for the awesome tip. I tried what you gave me, and at first it didn't work. Then I simplified the name even more (took out the underscore) and that did the trick! So the last 2 lines look like this now:
| stats values(time_5m_value) AS FiveMinCount values(time_15m_prev_upperBound) AS Prev15mUprBnd values(time_15m_prev_lowerBound) AS time_15m_prev_lowerBound values(time_15m_prev_avg) AS time_15m_prev_avg values(time_15m_prev_stdev) AS time_15m_prev_stdev
| eval isTrue=if('FiveMinCount'>'Prev15mUprBnd', 1, 0)
02-13-2019
09:16 PM
I have a query where I do a bunch of computations, and at the end of it I want to add a new field based on the result of comparing the numeric values of 2 other fields. Here is that eval statement:
| eval isTrue=if('5m_value'>'15m_prev_upperBound', 1, 0)
The 'isTrue' field is always assigned 0 regardless of whether field '5m_value' is greater than '15m_prev_upperBound' or not! I don't know what I am doing wrong. I tried using case in the eval, but I still get the same results.
FULL QUERY:
index=cisco sourcetype=rcd earliest=-20m@m latest=-5m@m
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| stats count(eval(like(Variable10,"Tx%|NS|%"))) as NS_Count by _time
| streamstats count as pri_key
| streamstats avg(NS_Count) as avg, stdev(NS_Count) as stdev
| eval avg=round(avg,2)
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*2)
| eval upperBound=(avg+stdev*2)
| eval 5m_value=if(pri_key=4,'NS_Count',"")
| eval 15m_prev_upperBound=if(pri_key=3,'upperBound',"")
| eval 15m_prev_lowerBound=if(pri_key=3,'lowerBound',"")
| eval 15m_prev_avg=if(pri_key=3,'avg',"")
| eval 15m_prev_stdev=if(pri_key=3,'stdev',"")
| stats values(5m_value) as 5m_value values(15m_prev_upperBound) as 15m_prev_upperBound values(15m_prev_lowerBound) as 15m_prev_lowerBound values(15m_prev_avg) as 15m_prev_avg values(15m_prev_stdev) as 15m_prev_stdev
| eval isTrue=if('5m_value'>'15m_prev_upperBound', 1, 0)
11-30-2018
07:30 PM
Thanks HiroshiSatoh! skoelpin's query actually brought me closer to the solution. I posted my final query above.
11-30-2018
07:29 PM
Hello again skoelpin! Thank you so much again for the query you provided. That gave me an idea on how to come up with my own query. I don't use streamstats often but I will now! 🙂 Here is my final query. I added some stdev calculations; I don't want to get too many false alerts, so I thought of calculating upper and lower limits using the standard deviation:
index=uc sourcetype=rcd| where like (Variable10,"Tx|%|NS|%")
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
| timechart span=5m count(Variable10) as NS_Count
| streamstats count as pri_key
| streamstats avg(NS_Count) as avg, stdev(NS_Count) as stdev window=3
| eval avg=round(avg,2)
| eval stdev=round(stdev,2)
| eval lowerBound=(avg-stdev*2)
| eval upperBound=(avg+stdev*2)
| eval 5m_value=if(pri_key=4,'NS_Count',"")
| eval 15m_prev_upperBound=if(pri_key=3,'upperBound',"")
| eval 15m_prev_lowerBound=if(pri_key=3,'lowerBound',"")
| eval 15m_prev_avg=if(pri_key=3,'avg',"")
| eval 15m_prev_stdev=if(pri_key=3,'stdev',"")
| stats values(5m_value) as 5m_value values(15m_prev_upperBound) as 15m_prev_upperBound values(15m_prev_lowerBound) as 15m_prev_lowerBound values(15m_prev_avg) as 15m_prev_avg values(15m_prev_stdev) as 15m_prev_stdev
| eval boolean=if('5m_value'>'15m_prev_upperBound',"1","0")
11-30-2018
05:49 PM
Hi skoelpin! Thank you so much! I think this gets me close enough to what I need. 🙂 Btw, are you just doing the last 20 minutes?
11-19-2018
12:04 PM
I am trying to write a query that counts the number of errors for the last 5 minutes, and then I want to check whether that error total is greater than the average for the previous 15 minutes, triggering an alert if series s0 is greater than the mean.
I found part of the solution on Splunk Answers (https://answers.splunk.com/answers/151921/how-to-set-up-alert-when-error-count-of-latest-week-is-greater-than-average-of-all-weeks-in-past-30-days.html), but my eval expression to calculate the mean field does not seem to work; the field is just empty. What am I not doing right here?
Query is below:
index=cisco_uc sourcetype=RCD| where like (Variable10,"Tx|%")
| bucket _time span=5m
| stats latest(Variable10) as Variable10 by _time Variable2
|timechart span=5m count(eval(like(Variable10,"%|U%"))) as U_Count
|timewrap 5min series=short
|eval mean=(U_Count_s1 + U_Count_s2 + U_Count_s3)/3
|where U_Count_s0 > mean
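One plausible cause of the empty mean (an assumption, since the data isn't shown): if any of U_Count_s1..s3 is missing for a row, the whole arithmetic eval evaluates to null. In SPL you would typically wrap each term in coalesce(field, 0) or run fillnull first. The defensive version of the calculation, sketched in Python with made-up values:

```python
def mean_of_series(row, keys=("U_Count_s1", "U_Count_s2", "U_Count_s3")):
    """Average the previous windows, treating a missing series as 0
    (the equivalent of coalesce(field, 0) inside the eval)."""
    return sum(row.get(k, 0) for k in keys) / len(keys)

row = {"U_Count_s0": 42, "U_Count_s1": 10, "U_Count_s3": 20}  # s2 missing
print(mean_of_series(row))  # 10.0
```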
- Tags:
- splunk-enterprise
10-16-2018
06:44 PM
I have a query that gives me the results I need. I just wanted to ask the gurus out here to look at my SPL, and if there is a more efficient way to do it, I'd love to hear it. 😃 Thanks in advance!
Here's the SPL:
index=abc (sourcetype=abc_MainReportLog "Entered Phone Number" Phone!=1234567890) OR ( sourcetype=abc_core_MainReportLog "\|RemoteApplicationData\|" VH_ICMUUI_CALLVARIABLE3!=1234567890 CV7=*)
| rename CALLVARIABLE3 as Phone
| bucket _time span=1m
| stats dc(CallID) as Count by _time Phone CV7
| where Count >=2
| timechart span=1h sum(Count) as Total
| fillnull value=0
- Tags:
- splunk-enterprise
10-16-2018
04:42 PM
Ahh, it was just the order of the fields within the eval. Thank you very much! =) Working now.
09-28-2018
05:32 PM
Thank you very much! This gets me 90% of the desired outcome. What I am missing: if CALLVARIABLE3 and CALLVARIABLE6 are both NOT empty AND do not contain the same value, count CALLVARIABLE6. In the query below, if CALLVARIABLE3 is not the same as CALLVARIABLE6, CALLVARIABLE3 is assigned to field Final. I tried doing "CALLVARIABLE3 != CALLVARIABLE6" but that doesn't seem to work. 😞
index=abc sourcetype=MainReportLog "|RemoteApplicationData|"
| eval CALLVARIABLE6=if(CALLVARIABLE6="",null, CALLVARIABLE6), CALLVARIABLE3=if(CALLVARIABLE3="",null, CALLVARIABLE3)
| eval Final=coalesce(CALLVARIABLE3, CALLVARIABLE6)| table CALLVARIABLE3 CALLVARIABLE6 Final
Thanks in advance!
09-28-2018
04:29 PM
Thank you very much! This gets me 90% of the desired outcome. 😃 I am missing the scenario where CALLVARIABLE3 and CALLVARIABLE6 are both not null AND NOT EQUAL to each other; in that case, count CALLVARIABLE6 instead. It doesn't look like I can do a straight CALLVARIABLE3 != CALLVARIABLE6. What would be the right syntax?
index=abc sourcetype=MainReportLog "|RemoteApplicationData|"
| eval CALLVARIABLE6=if(CALLVARIABLE6="",null, CALLVARIABLE6), CALLVARIABLE3=if(CALLVARIABLE3="",null, CALLVARIABLE3)
| eval Final=coalesce(CALLVARIABLE3, CALLVARIABLE6)| table CALLVARIABLE3 CALLVARIABLE6 Final
Thanks in advance!!
09-26-2018
07:24 PM
How do I go about counting the following scenarios:
1) CALLVARIABLE3 and CALLVARIABLE6 are both empty: do not count
2) CALLVARIABLE3 is empty and CALLVARIABLE6 has a value: count CALLVARIABLE6
3) CALLVARIABLE3 is not empty and CALLVARIABLE6 is empty: count CALLVARIABLE3
4) CALLVARIABLE3 and CALLVARIABLE6 are both not empty and have the same value: count CALLVARIABLE6
5) CALLVARIABLE3 and CALLVARIABLE6 are both not empty but do not have the same value: count CALLVARIABLE6
Sample raw data below:
XXXXXXXYYYYYYYZZZZZZ|RemoteApplicationData|("CALLVARIABLE1"="","CALLVARIABLE2"="152574786091","CALLVARIABLE3"="2021212324,N,0000000,000,0","CALLVARIABLE4"="000,000,0,0000000,S,4,1,071011,N,0","CALLVARIABLE5"="Z1525740000786091","CALLVARIABLE6"="2021212324,8889991234,8889991234","CALLVARIABLE7"="L_SPEAK_FREELY","CALLVARIABLE8"="A,D,01,C,0,0,0,0,1,0,00,002,0,G,0,1,0,0","CALLVARIABLE9"="","CALLVARIABLE10"="N,,","APPLICATIONDATA"="1 0 user.XfrReason R 1 0 user.EndPtCode X 1 0 user.LstPrmpt PS1394 1 0 user.ANIMatch`T")|2018-09-26 19:03:11.808
Thank you in advance!
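Note that rules 1-5 collapse to a single preference: take CALLVARIABLE6 whenever it is non-empty, fall back to CALLVARIABLE3, and skip when both are empty. A hedged Python sketch of that selection and counting (sample values are made up):

```python
from collections import Counter

def pick(cv3, cv6):
    """Apply rules 1-5: prefer CALLVARIABLE6 when non-empty,
    else CALLVARIABLE3, else nothing (do not count)."""
    return cv6 or cv3 or None

# (CALLVARIABLE3, CALLVARIABLE6) pairs covering the five scenarios.
events = [("", ""), ("", "b"), ("a", ""), ("x", "x"), ("a", "b")]
counts = Counter(v for cv3, cv6 in events if (v := pick(cv3, cv6)))
print(counts)
```

The SPL equivalent is essentially a coalesce with the arguments in CALLVARIABLE6-first order, after nulling out empty strings.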
- Tags:
- splunk-enterprise
09-26-2018
07:10 PM
Hi MuS,
Thank you for taking time to look at my question. For your first suggestion (still using join), that was what I was doing initially; I had CallID included in the 'stats count' but it was still not giving me the CallID 😞 That was why I went ahead and posted my question here on Splunk Answers.
Your second suggestion does give me data including the CallID, but the output isn't quite right.
Maybe raw data will help??
Raw data for the main search looks like the data below. The data before the first | delimiter is my CallID, and the Phone is the data after the second | delimiter. (I did field extractions so that CallID and Phone are automatically detected):
XXXXXXXYYYYYYYZZZZZZ|Entered Phone Number|2021212324|2018-09-26 18:57:12.000
ZZZZZMMMMMMTTTTTT|Entered Phone Number|3031323334|2018-09-26 18:57:11.531
Now I was attempting to join the main search with a subsearch because I needed CV7. Raw data for the join subsearch looks like the data below. Again, I did a field extraction to automatically extract CV7, which in the data below is called CALLVARIABLE7. The data before the first | delimiter is my CallID:
XXXXXXXYYYYYYYZZZZZZ|RemoteApplicationData|("CALLVARIABLE1"="","CALLVARIABLE2"="152574786091","CALLVARIABLE3"="2021212324,N,0000000,000,0","CALLVARIABLE4"="000,000,0,0000000,S,4,1,071011,N,0","CALLVARIABLE5"="Z1525740000786091","CALLVARIABLE6"="2021212324,8889991234,8889991234","CALLVARIABLE7"="L_SPEAK_FREELY","CALLVARIABLE8"="A,D,01,C,0,0,0,0,1,0,00,002,0,G,0,1,0,0","CALLVARIABLE9"="","CALLVARIABLE10"="N,,","APPLICATIONDATA"="1 0 user.XfrReason R 1 0 user.EndPtCode X 1 0 user.LstPrmpt PS1394 1 0 user.ANIMatch`T")|2018-09-26 19:03:11.808
The end result really is that I only want to see Phone, CallID and CV7 if the count is >=2 (meaning the phone # appeared more than once within the same minute).
Any suggestion is much appreciated!
09-25-2018
07:45 PM
minor correction to the query:
index=ABC sourcetype=ABC_MainReportLog "Entered Phone Number" Phone!=1234567890
| dedup CallID
| table CallID Phone _time
| join type=inner CallID
[ search index=ABC sourcetype=ABC_core_MainReportLog "|RemoteApplicationData|" CV7=*
| dedup CallID]
| fields Phone, CV7, CallID
| bucket _time span=1m
| stats count(CallID) as Count by _time Phone CV7
| where Count >=2
09-25-2018
07:41 PM
I need to run a report that gives me phone numbers that appeared >=2 times within the same minute, with the corresponding CallID, and then I need to do an inner join to include another field called CV7. I was able to accomplish that with this query:
index=ABC sourcetype=ABC_MainReportLog "Entered Phone Number" Phone!=1234567890
| dedup CallID
| table CallID Phone _time
| join type=inner CallID
[ search index=ABC sourcetype=ABC_core_MainReportLog "\|RemoteApplicationData\|" CV7=*
| dedup CallID]
| fields Phone, State, CallID
| bucket _time span=1m
| stats count(CallID) as Count by _time Phone State
| where Count >=2
The query above gives me _time, Phone, State and Count. But I also need to display the CallID. How do I do that?
Thanks in advance!
- Tags:
- splunk-enterprise
08-08-2018
08:30 PM
I have a query that counts events from 30 days ago to the current day, but I filter the results so that I only get the count of events for the days in those 30 days that fall on the current weekday. So if today is Wednesday, I would only see the count of events for all the Wednesdays in the last 30 days. The query is below:
index=abc sourcetype=abc_proxy (Action=InteractionQueued OR Action=InteractionDequeued) earliest=-30d@d latest=now()
| timechart span=1d count
| eval day_of_week = lower(strftime(_time, "%A")), now = lower(strftime(now(), "%A"))
| where day_of_week = now
| fields - day_of_week - now
The table is kind of big because I am doing hourly breakdowns, but a portion of the output looks like this:
_time count
2018-07-11T00:00 7872
2018-07-11T01:00 5741
2018-07-11T02:00 6480
2018-07-11T03:00 10198
2018-07-11T04:00 11394
2018-07-11T05:00 17033
2018-07-11T06:00 17464
2018-07-11T07:00 21961
2018-07-11T08:00 28636
2018-07-11T09:00 27801
2018-07-11T10:00 28537
2018-07-11T11:00 27996
2018-07-11T12:00 24798
2018-07-11T13:00 27681
2018-07-11T14:00 25653
2018-07-11T15:00 32204
2018-07-11T16:00 32450
2018-07-11T17:00 23217
2018-07-11T18:00 23988
2018-07-11T19:00 22152
2018-07-11T20:00 19021
2018-07-11T21:00 19446
My problem now is with the visualization. Right now I get one continuous line for all the weekdays that match the current weekday. I would like a line graph where each day is a separate line and where each line has hourly granularity.
I wish I could attach images but I don't have enough points. But basically: take whatever the output of the query is, switch to the visualization tab, and select line graph.
Thanks in advance
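As an aside, the weekday filter in the query above boils down to comparing each bucket's weekday name with today's. The same logic, sketched in Python with illustrative dates:

```python
from datetime import datetime

def same_weekday(timestamps, today):
    """Keep only timestamps whose weekday matches 'today',
    mirroring the strftime(_time, "%A") comparison in the query."""
    target = today.strftime("%A")
    return [t for t in timestamps if t.strftime("%A") == target]

days = [datetime(2018, 7, 11), datetime(2018, 7, 12), datetime(2018, 7, 18)]
print(same_weekday(days, datetime(2018, 7, 25)))  # keeps the Wednesdays only
```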
- Tags:
- splunk-enterprise
08-08-2018
07:35 PM
Hi Frank,
Exactly, those multiple occurrences belong to one event. In my case, 1 event is actually one XML log file. And that's why, per event, I only want to count the last occurrence of the string and then move on to the next event.
Thank you!