Activity Feed
- Got Karma for Re: Need Help Designing Total Outage Alerts. 03-15-2021 06:53 AM
- Posted Re: Need Help Designing Total Outage Alerts on Splunk Search. 03-14-2021 03:29 PM
- Posted Re: Need Help Designing Total Outage Alerts on Splunk Search. 03-14-2021 02:55 PM
- Posted Re: Need Help Designing Total Outage Alerts on Splunk Search. 03-14-2021 02:48 PM
- Karma Re: Need Help Designing Total Outage Alerts for richgalloway. 03-14-2021 02:20 PM
- Posted Re: Need Help Designing Total Outage Alerts on Splunk Search. 03-13-2021 08:21 AM
- Posted Need Help Designing Total Outage Alerts on Splunk Search. 03-12-2021 05:05 PM
- Posted Re: How do I make a string into a field / variable? on Splunk Enterprise. 02-05-2021 11:27 AM
- Posted How do I make a string into a field / variable? on Splunk Enterprise. 02-04-2021 01:14 PM
- Posted Restricting date range in data from inputlookup on Splunk Enterprise. 12-29-2020 04:27 PM
- Posted Customized Sum Condition on Splunk Search. 11-16-2020 04:46 PM
- Posted Re: How to transform text type token values on dashboard? on Dashboards & Visualizations. 10-15-2020 10:54 AM
- Karma Re: How to transform text type token values on dashboard? for niketn. 10-15-2020 10:53 AM
- Got Karma for How to transform text type token values on dashboard?. 10-11-2020 08:18 AM
- Posted How to transform text type token values on dashboard? on Dashboards & Visualizations. 10-09-2020 03:10 PM
- Tagged How to transform text type token values on dashboard? on Dashboards & Visualizations. 10-09-2020 03:10 PM
- Karma Re: What happened to the transposed dates? for richgalloway. 06-05-2020 12:51 AM
- Karma Re: How to return a single value from a subsearch into eval Part 2 for richgalloway. 06-05-2020 12:51 AM
- Posted How to return a single value from a subsearch into eval Part 2 on Splunk Search. 05-20-2020 12:07 PM
03-14-2021
03:29 PM
1 Karma
Figured it out...

| inputlookup append=true Provider_Alert.csv where Tier=Tier1 Active="TRUE"
| join type=outer
    [ search index=wsi sourcetype=fdpwsiperf intuit_offeringid IN ("Intuit.tax.ice.ice", "Intuit.platform.turbotaxwindows","Intuit.tax.ctg.ice.109ximportwidget","Intuit.platform.turbotaxipad.turbotaxmac") api_version=* tier=Tier1
    | eval offering=if(in(intuit_offeringid,"Intuit.tax.ice.ice","Intuit.tax.ctg.ice.109ximportwidget"),"TTO","TTD")
    | eval Provider_ID=coalesce(partnerId, legacy_id)
    | search Provider_ID!=*test*
    | chart dc(intuit_tid) as import_activity OVER Provider_ID BY offering
    | fields Provider_ID *]
| fillnull
03-14-2021
02:55 PM
Tinkered around a little...

| rename partner_idd as Provider_ID

This renames partner_idd from the Splunk events and wipes away Provider_ID from Provider_Alert.csv.

| rename Provider_ID as partner_idd

This wipes away partner_idd from the actual Splunk events and renames the data from Provider_Alert.csv.
03-14-2021
02:48 PM
@richgalloway -- so this solves the join problem between mutual values in Provider_Alert.csv and my data from Search_A; however, I still can't detect outage situations 😞 Values from Provider_Alert.csv that are not in Search_A are gone. For example, I know we had a service outage at a very specific time... so I'm looking for that Provider_ID (676) from Provider_Alert.csv to be present in my stats results with 0. The query below with your suggestions doesn't yield any results 😞

index=wsi sourcetype=fdpwsiperf intuit_offeringid IN ("Intuit.tax.ice.ice", "Intuit.platform.turbotaxwindows","Intuit.tax.ctg.ice.109ximportwidget","Intuit.platform.turbotaxipad.turbotaxmac") api_version=* tier=Tier1
| eval offering=if(in(intuit_offeringid,"Intuit.tax.ice.ice","Intuit.tax.ctg.ice.109ximportwidget"),"TTO","TTD")
| eval partner_idd=coalesce(partnerId, legacy_id)
| eval form_type=if(like(capability,"109%"), "109X",'capability')
| eval partner_name=coalesce(partnerId, partner_uid)
| search partner_name!=*test*
| eval combo='partner_idd'+"::"+'form_type'+"::"+'partner_name'+"::"+'api_version'
| chart dc(intuit_tid) as import_activity OVER combo BY offering
| eval partner_idd=mvindex(split(combo,"::"),0)
| eval api_version=mvindex(split(combo,"::"),3)
| fields partner_idd *
| inputlookup append=true Provider_Alert.csv
| rename partner_idd as Provider_ID
| stats values(*) as * by Provider_ID api_version
| where Provider_ID=676
03-13-2021
08:21 AM
Hey @richgalloway, thanks for passing along. I followed the high-level concept of the blog, but I'm stuck on my use case, which is much more complex. My file (Provider_Alert.csv) has several columns, so everything appended is blank, and I'm not sure how to "join" the mutual fields together. Based on the article, so far I have this:

splunk_server=indexer* index=wsi sourcetype=fdpwsiperf intuit_offeringid IN ("Intuit.tax.ice.ice", "Intuit.platform.turbotaxwindows","Intuit.tax.ctg.ice.109ximportwidget","Intuit.platform.turbotaxipad.turbotaxmac") api_version=*
| eval offering=if(in(intuit_offeringid,"Intuit.tax.ice.ice","Intuit.tax.ctg.ice.109ximportwidget"),"TTO","TTD")
| eval partner_idd=coalesce(partnerId, legacy_id)
| eval form_type=if(like(capability,"109%"), "109X",'capability')
| eval partner_name=coalesce(partnerId, partner_uid)
| search partner_name!=*test*
| eval combo='partner_idd'+"::"+'form_type'+"::"+'partner_name'+"::"+'api_version'
| chart dc(intuit_tid) as import_activity OVER combo BY offering
| eval partner_idd=mvindex(split(combo,"::"),0)
| eval api_version=mvindex(split(combo,"::"),3)
| fields partner_idd *
| inputlookup append=true Provider_Alert.csv

The resulting table is this: Is there a way to associate partner_idd with the field called Provider_ID from Provider_Alert.csv before appending, so that there's data filled in for mutual matches?
03-12-2021
05:05 PM
Hi there, I'm having a really hard time creating an alert based on a search that detects the absence of events. I have a list of the total customers we monitor contained in a .csv in Splunk called Provider_Alert.csv. My goal is to create (in SQL terms) a "left" join where my "left" table is all the Providers from Provider_Alert.csv and the second joined table is based off of Splunk logged events (let's call this Search_A), where if there is no match, the absence of events shows as 0. An additional challenge is that the mutual field that joins Provider_Alert.csv and Search_A is one I have to derive with eval and coalesce statements (let's call this partner_idd), since it's split across two fields in Search_A. So TL;DR, I'd like to join:

All entries in Provider_Alert.csv
JOIN (whatever events are available from Search_A | eval partner_idd=coalesce(field1, field2))
ON partner_idd

And if there are no results from the JOIN, then it's 0. I've spent a whole afternoon on this, so I'm desperate and open to any recommendations.
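For what it's worth, a common zero-fill pattern here is to aggregate the events, append the lookup, and re-aggregate so providers with no events survive with a 0. A minimal sketch, assuming the field names described above (Search_A stands in for the real base search, field1/field2 are the two source fields, and dc(intuit_tid) is borrowed from the later posts in this thread):

[Search_A base query]
| eval partner_idd=coalesce(field1, field2)
| stats dc(intuit_tid) as import_activity by partner_idd
| rename partner_idd as Provider_ID
| inputlookup append=true Provider_Alert.csv
| stats max(import_activity) as import_activity by Provider_ID
| fillnull value=0 import_activity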
Labels:
- join
02-05-2021
11:27 AM
Thank you @scelikok !!
02-04-2021
01:14 PM
Hello, my raw logs look something like this:

Example 1:
2021-02-03 23:59:07,216 LogLevel=INFO my_appid= intuit_tid=EFEtoPBI805aaa9f-9254-499b-ae80-2c39ca7b33cd provider_tid=fe62e521-a9c6-4d3a-8c45-a25a10abd5ac class=com.intuit.fds.provider.dao.impl.EinDAOImpl Get disabled ein by ein=821570477

Example 2:
2021-02-03 23:59:07,216 LogLevel=INFO my_appid= intuit_tid=EFEtoPBI805aaa9f-9254-499b-ae80-2c39ca7b33cd provider_tid=fe62e521-a9c6-4d3a-8c45-a25a10abd5ac class=com.intuit.fds.provider.service.impl.EinServiceImpl Create or update ein=821570477 einVO=EinVO [ein=821570477, active=false, einProviderRelationships=EinProviderRelationshipsVO [einProviderRelationship=[EinProviderRelationshipVO [id=null, active=null, providerId=5ece3c4d-6791-4bed-bbf5-fd9c0736c129, taxYear=2020, serviceName=W2, actualAvailabilityDate=2021-02-03T23:59:07.172-08:00, expectedAvailabilityDate=2021-02-03T23:59:07.172-08:00, preference=1, synced=false]]]]

My goal is to create a single field / variable (let's call it action_type) whose value is determined by the presence of the string "Create or update" (action_type=add) or "Get disabled" (action_type=disable). My struggle is that these strings aren't associated with any fields, so I'm not sure how to have my eval use the LIKE function without defining a field. Please help! My work so far:

[base query]
| eval action_type=CASE( LIKE(??, "Get disabled"), "disable", LIKE(??, "Create or update"), "add", 1==1, "null")
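Every event keeps its full text in the built-in _raw field, so one way is to match against that. A minimal sketch of the case statement (note the % wildcards, since like() does whole-pattern matching rather than substring matching):

[base query]
| eval action_type=case(like(_raw, "%Get disabled%"), "disable", like(_raw, "%Create or update%"), "add", 1==1, "null")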
12-29-2020
04:27 PM
My goal is to make a report that has running-total (cumulative) data across years. Current-year data is queried from Splunk, while prior-year data is all housed in a lookup (called TY19_Splunk_total_data.csv). My issue is that this report will be on a dashboard that has date range selectors. When a date range is selected, streamstats works correctly for current-year data (since it isolates the data from the dates selected in the range THEN adds), but not for prior-year data, because I don't know how to restrict the data in the inputlookup by "date" + 1 yr while, at the same time, having the tokens apply to my base Splunk search. Hopefully that makes sense... here's the query I'm working with:

[base query] year=<current_year>
| timechart span=1d dc(intuit_tid) as current_year_data
| streamstats sum(current_year_data) as current_year_data
| eval time=strftime(_time,"%m-%d")
| join time
    [| inputlookup TY19_Splunk_total_data.csv
    | eval token_time=relative_time(strptime(time,"%m/%d/%Y"),"+1y")
    | where capability="W2" and token_time>=$time.earliest$ and token_time<$time.latest$
    | eval time=strftime(strptime(time,"%m/%d/%Y"),"%m-%d")
    | stats sum(attempts) as prior_year_data by time
    | streamstats sum(prior_year_data) as prior_year_data
    | fields time prior_year_data]
| fields time current_year_data prior_year_data
| fields - _time
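One wrinkle to watch: $time.earliest$ can arrive as a relative string like "-7d" rather than an epoch number, which silently breaks the numeric comparison. A hedged alternative is addinfo, which exposes the resolved epoch bounds of the search's own time range (assuming the subsearch inherits the panel's range). A sketch of just the subsearch head, with the rest continuing as above:

[| inputlookup TY19_Splunk_total_data.csv
| addinfo
| eval token_time=relative_time(strptime(time,"%m/%d/%Y"),"+1y")
| where capability="W2" AND token_time>=info_min_time AND token_time<info_max_time
| fields - info_min_time info_max_time info_search_time info_sid
...]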
Tags:
- inputlookup
Labels:
- troubleshooting
11-16-2020
04:46 PM
I have the following query:

splunk_server=indexer* index=wsi sourcetype=fdpwsiperf (channel_type=ofx2 OR agent_service=OfxAgent) domain=tax api_version=v1 capability=* tax_year=2019 NOT *test* NOT *jmeter-automation* ofx_codes!="[15500,2000]"
| lookup Provider_Alert.csv Provider_ID AS partnerId OUTPUT Tier Form_Type
| search Tier=Tier1
| eval time_bucket=case(_time>=relative_time(now(),"-1h"), "last_hour", 1==1, "prior_hour")
| eval error_type=case(error_code_host="2000", "OFX_2000", error_code_service IN ("5000","5001"), "provider_unavailable", like(http_status_code_host,"5%"), "HTTP_500",1==1,"null")
| eval combo=partnerId."::".provider_id."::".Form_Type."::".host_base_url."::".error_type
| chart dc(intuit_tid) as total_requests by combo time_bucket
| eval partnerId=mvindex(split(combo,"::"),0)
| eval provider_id=mvindex(split(combo,"::"),1)
| eval Form_Type=mvindex(split(combo,"::"),2)
| eval host_base_url=mvindex(split(combo,"::"),3)
| eval error_type=mvindex(split(combo,"::"),4)
| fields partnerId provider_id Form_Type host_base_url error_type last_hour prior_hour

This produces a table where the following result is possible:

partnerId | provider_id | Form_Type | host_base_url | error_type | last_hour | prior_hour
---|---|---|---|---|---|---
partner1 | XYZ | FormA | urlB | null | 50 | 30
partner1 | XYZ | FormA | urlB | HTTP_500 | 12 | 20
partner2 | ABC | FormB | urlZ | null | 20 | 30

I would like to add a column that sums the values in last_hour grouped by partnerId, so that in the above example there would be another column (ie. extra_column) containing 62 (ie. 50 + 12 = 62) in the two rows for partner1. Extra note: I need the volume breakdown by error_type, but not in a chart format. How can I achieve this?
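eventstats is one stats-family command that attaches a group aggregate to every row without collapsing the table; a minimal sketch appended to the end of the query above (extra_column named as in the post):

| eventstats sum(last_hour) as extra_column by partnerId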
10-15-2020
10:54 AM
This is perfect @niketn! Thank you! QQ -- what does this do in the <search>?

| fields - _time

I removed it and my dashboard kept scrolling all the way to the bottom... I added it back and then it behaved correctly. I'm not sure if my dashboard was misbehaving due to the absence of this or for some other reason...
10-09-2020
03:10 PM
1 Karma
Hi there, I have a dashboard with a text box field. My goal is to make this field more flexible for different input variations, so that you don't need an exact text match to get a result. Basically, how can I transform the input's token value so that by the time it reaches my SPL query it's all lower case with no spaces? For example, if someone searches for any of the following: "Bank of America", "bank of America", "bank of america", "BANK OF AMERICA", the token value for all the entries above will be transformed to "bankofamerica" (no spaces, all lower case), which will THEN be used as a search value in my SPL query. I just know the basics of token usage, but haven't done any transforms, and the documentation is confusing... please help!!
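One way to sidestep token transforms entirely is to normalize both sides inside the SPL itself. A minimal sketch, where institution is a hypothetical field name and $bank_tok$ is the text box's (assumed) token name:

[base query]
| eval institution_norm=lower(replace(institution, "\s+", ""))
| where institution_norm=lower(replace("$bank_tok$", "\s+", ""))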
Labels:
- token
05-20-2020
12:07 PM
I found a different answer article with an example of what I'm trying to do, but I can't get it to work on my end.
I'd like to calculate a value using eval and subsearch (adding a column with all row values having this single calculated value). I've replicated what the past article advised, but I'm getting a "Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr])." message. I've also identified that it's the eval with the subsearch causing this, because the query works when removing that function.
Past article with same question: https://answers.splunk.com/answers/240798/how-to-return-a-single-value-from-a-subsearch-into.html
Here's my query:
splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* provider_id=* partnerId=* capability=* error_msg_service=* http_status_code_host=* ofx_schema_response_error!=null
| eval ofx_schema_response_error= [eval statements unimportant for this example]
| stats dc(intuit_tid) as schema_error dc(eval(if(error_msg_service="OK", intuit_tid, null()))) as successful_imports by ofx_schema_response_error
| eval total_events =
[search splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* provider_id=* partnerId=* capability=* error_msg_service=* http_status_code_host=*
| stats dc(intuit_tid) as total_events
| return total_events]
| eval failed_imports = schema_error - successful_imports
| sort - schema_error
Thanks!
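For what it's worth, | return total_events emits the literal text total_events=<value>, so the outer eval becomes a field comparison and fails with exactly that boolean error. Prefixing the field with $ makes return emit just the value; a hedged sketch of the fix:

| eval total_events =
[search splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* provider_id=* partnerId=* capability=* error_msg_service=* http_status_code_host=*
| stats dc(intuit_tid) as total_events
| return $total_events]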
05-20-2020
11:33 AM
@to4kawa -- sorry, that was just a test case against that partner. The result of that subsearch can be any partner where the where condition applies
05-15-2020
02:26 PM
I found a work-around with the sub-search in the beginning... but I am open to more graceful and creative ways of doing this... as this is incredibly clunky
splunk_server=indexer* index=wsi sourcetype=fdpwsiperf (channel_type=ofx2 OR agent_service=OfxAgent) domain=tax api_version=v1 capability=* tax_year=2019 partnerId!=*test*
[search splunk_server=indexer* index=wsi sourcetype=fdpwsiperf (channel_type=ofx2 OR agent_service=OfxAgent) domain=tax api_version=v1 capability=* tax_year=2019 partnerId!=*test* partnerId=*
| lookup Provider_Alert.csv Provider_ID AS partnerId OUTPUT Tier Form_Type
| search Tier=Tier1
| eval error_category=case(like(http_status_code_host,"5%"), "5XX", like(http_status_code_host,"4%"),"4XX", http_error_host=timeout_event, 'http_error_host', 1==1, "Other")
| chart dc(intuit_tid) OVER partnerId by error_category
| addtotals fieldname="total_events"
| eval error_rate=round(((total_events-Other)/total_events)*100,2)
| where total_events >= 25 AND error_rate >= 40
| fields partnerId]
| lookup Provider_Alert.csv Provider_ID AS partnerId OUTPUT Tier Form_Type
| search Tier=Tier1
| eval capability=if(like(capability,"109%"),"1099",'capability')
| eval error_category=case(like(http_status_code_host,"5%"), "5XX", like(http_status_code_host,"4%"),"4XX", http_error_host="Read timed out", 'http_error_host', 1==1, "Other")
| where error_category!="Other"
| table _time, partnerId, intuit_tid, error_category, capability, tax_year, ofx_appid, host_base_url
| sort 10 - _time
| rename intuit_tid as TRNUID
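A possibly lighter alternative to running the base search twice is to compute the gate in one pass with eventstats, which keeps the raw events while attaching the per-partner aggregates; a hedged sketch of just the gating portion:

[base search]
| lookup Provider_Alert.csv Provider_ID AS partnerId OUTPUT Tier Form_Type
| search Tier=Tier1
| eval error_category=case(like(http_status_code_host,"5%"), "5XX", like(http_status_code_host,"4%"),"4XX", http_error_host="Read timed out", 'http_error_host', 1==1, "Other")
| eventstats dc(intuit_tid) as total_events, dc(eval(if(error_category!="Other", intuit_tid, null()))) as error_events by partnerId
| eval error_rate=round((error_events/total_events)*100,2)
| where total_events >= 25 AND error_rate >= 40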
05-15-2020
02:23 PM
@richgalloway -- I totally see how that's confusing. Apologies. It's a complex question and it's hard to describe.
I want a solution that outputs raw data in a table... but will only do so if the aggregations of that raw data meet certain criteria (ie. "if total count is more than 10, spit out raw data")
05-15-2020
01:24 PM
My goal is to design an alert that will populate a table of raw results, but only when certain aggregate conditions apply. For example, if the total count of events in a time frame is >100, post the table of raw data. How do I achieve this limitation (similar to SQL's HAVING) while preserving my desired table output?
My query so far, which reflects the table output I desire without the "Having" logic:
splunk_server=indexer* index=wsi sourcetype=fdpwsiperf (channel_type=ofx2 OR agent_service=OfxAgent) domain=tax api_version=v1 capability=* tax_year=2019 partnerId!=*test* partnerId="ADP"
| lookup Provider_Alert.csv Provider_ID AS partnerId OUTPUT Tier Form_Type
| search Tier=Tier1
| eval capability=if(like(capability,"109%"),"1099",'capability')
| eval error_category=case(like(http_status_code_host,"5%"), "5XX", like(http_status_code_host,"4%"),"4XX", http_error_host="Read timed out", 'http_error_host', 1==1, "Other")
| table _time, partnerId, intuit_tid, error_category, capability, tax_year, ofx_appid, host_base_url
| rename intuit_tid as TRNUID
Do not direct me to "From SQL to Splunk SPL" documentation. I've reviewed it, and it's not helpful for my use case.
Thanks!
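One pattern that behaves like SQL's HAVING without collapsing the raw rows is eventstats followed by where; a minimal sketch appended after the table command above (the 100-event threshold is illustrative):

| eventstats count as event_total
| where event_total > 100
| fields - event_total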
05-01-2020
09:34 AM
So I think it might be a problem with the data in the lookup, but I don't understand what's wrong. When I run this basic query:
| inputlookup TY18_Splunk_total_data.csv
| where capability="109X"
I can see data against dates 1/2/2019 - 1/9/2019
Aggregating this data, I lose the data against these dates for some reason, and I don't understand why. I ran this query, which generated these results. I'm at a loss...
| inputlookup TY18_Splunk_total_data.csv
| where capability="109X"
| stats sum(attempts) by _time
05-01-2020
09:29 AM
Hi @kmugglet, thanks for the suggestion. I tried this and it didn't work :(. Join is what I'm looking for because I'm comparing days (represented by the row) across different years (represented by the two columns).
04-30-2020
03:37 PM
Here is my query (time range is YTD):
(splunk_server=indexer* index=wsi_tax_summary sourcetype=stash capability=109* tax_year=2019 ein=* intuit_offeringid=* partnerId!=*test* partnerId=*)
| timechart span=1d dc(intuit_tid) as 19attempts
| streamstats sum(19attempts) as 19attempts
| eval time=strftime(_time,"%m-%d")
| join type=left time
[| inputlookup TY18_Splunk_total_data.csv
| where capability="109X"
| stats sum(attempts) as 18attempts by _time
| streamstats sum(18attempts) as 18attempts
| eval time=strftime(strptime(_time,"%m/%d/%Y"), "%m-%d")
| fields time 18attempts]
| fields time 19attempts 18attempts
| rename 19attempts as "TY19"
| rename 18attempts as "TY18"
I understand a left join to mean that if results from my subsearch don't match the main search, they won't be included. If I run the query above, I get data in the TY18 column from 01-02 thru 01-09 (below).
I didn't expect data against those dates, so I copied the subsearch and ran it in a separate search window, and I can see (as I expected) there's no data from 01-02 thru 01-09 (below).
Am I not understanding something about join type? What's happening here?
04-09-2020
03:48 PM
I made a query that involves transposing a timechart (span=1w, analyzing since 1/1/2020).
The result is the exact layout I want; however, several columns representing dates after the transpose are missing (ie. nothing in February showed up).
Is there a limit in Splunk on how many columns get transposed?
Query:
splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* capability=* error_msg_service=* http_status_code_host=*
| timechart span=1w dc(intuit_tid) as total_requests, dc(eval(if(error_msg_service="OK", intuit_tid, null()))) as total_success
| sort -_time
| eval _time=strftime(_time,"%m/%d/%y")
| eval total_failures=total_requests-total_success
| eval success_rate= ROUND((total_success/total_requests)*100,2)
| transpose header_field=_time column_name=week_starting
| regex week_starting!=("^_")
| eval sortkey=case(week_starting="total_requests",1, week_starting="total_success", 2, week_starting="total_failures", 3, week_starting="success_rate", 4)
| sort sortkey
| fields - sortkey
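If I recall correctly, transpose only carries over five rows as columns by default, which would explain February getting cut off; passing an explicit row count lifts the limit. A sketch, with 52 chosen arbitrarily to cover a year of weekly buckets:

| transpose 52 header_field=_time column_name=week_starting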
04-03-2020
02:33 PM
I figured it out. It's the fieldformat that's the constraint. I removed it and I achieved what I wanted.
04-03-2020
02:31 PM
hello! This probably has a simple answer that I'm not understanding.
Running the query below adds a column at the very end called "success_rate". I don't want this, since I've transposed that field to the first row. It seems like the eval from line 4 is still trying to calculate...? How do I get rid of it? | fields - success_rate doesn't work 😞
index=wsi_tax_summary sourcetype=stash partnerId=* error_msg_service=* tax_year=2019 capability=* intuit_tid=* capability=*
| eval error_msg_service = case(match(error_msg_service, "OK"), "Success", 1==1, "Fail")
| timechart span=1w dc(intuit_tid) by error_msg_service
| fillnull
| eval total=Fail+Success, success_rate=round(((Success/total)*100),2)
| fieldformat success_rate=tostring('success_rate')+"%"
| fields _time, total, Success, Fail, success_rate
| eval _time=strftime(_time,"%m-%d-%Y")
| transpose column_name="Week Starting" header_field=_time
| regex "Week Starting"!=("^_")
| fields - success_rate
03-19-2020
03:57 PM
Also, just to note: I followed an example in the documentation that applies streamstats first and organizes the results as a table with a _time field; however, it doesn't look like streamstats can accommodate embedded stats functions (I'm applying a sum to the distinct count of a field).
splunk_server=indexer* index=wsi sourcetype=fdpwsiperf partnerId!=*test* partnerId=* error_msg_service=* tax_year=2019 ofx_appid=* capability=*
| eval error_msg_service = case(error_msg_service="OK", "Success", match(error_msg_service, "Provider/Host Request Error"), "HTTP Request Error", match(error_msg_service, "Provider/Host Response Error"), "HTTP Response Error", match(error_msg_service, "Provider/Host Not Available"), "Server Exception", 1==1, "Import Failure")
| streamstats sum(dc(intuit_tid)) as import_requests by error_msg_service
| table _time, import_requests, error_msg_service
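One way to express "sum of a distinct count" is to materialize the distinct count per time bucket first, then accumulate it; a hedged sketch assuming 10-minute buckets:

[base search and eval as above]
| bin _time span=10m
| stats dc(intuit_tid) as import_requests by _time error_msg_service
| streamstats sum(import_requests) as import_requests by error_msg_service
| table _time, import_requests, error_msg_service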
03-19-2020
03:50 PM
Hi there,
I'm trying to create time series data using the streamstats function. I've got it figured out, but is there any way to avoid the rename step and have the new columns produced by streamstats replace the old values?
I read the streamstats documentation, but didn't see any optional fields applied for my use cases (or I may not have understood it).
splunk_server=indexer* index=wsi sourcetype=fdpwsiperf partnerId!=*test* partnerId=* error_msg_service=* tax_year=2019 ofx_appid=* capability=*
| eval error_msg_service = case(error_msg_service="OK", "Success", match(error_msg_service, "Provider/Host Request Error"), "HTTP Request Error", match(error_msg_service, "Provider/Host Response Error"), "HTTP Response Error", match(error_msg_service, "Provider/Host Not Available"), "Server Exception", 1==1, "Import Failure")
| timechart span=10m dc(intuit_tid) by error_msg_service
| streamstats sum(*)
| rename "sum(Success)" as "Success", "sum(HTTP Request Error)" as "HTTP Request Error", "sum(HTTP Response Error)" as "HTTP Response Error", "sum(Import Failure)" as "Import Failure", "sum(Server Exception)" as "Server Exception"
Thanks!
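If memory serves, the stats family also accepts a wildcard rename, which writes the running sums straight back over the original column names and drops the rename step entirely; a sketch of the last two lines:

| timechart span=10m dc(intuit_tid) by error_msg_service
| streamstats sum(*) as *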