All Posts

@jason_hotchkiss QQ: Any custom JS you are using across dashboards? KV
Hi @gcusello, I have previously mentioned that we receive 13 device statuses in a single payload. I am attempting to set up an alert for each device. However, the current query, where I extract a single device serial number, is not functioning as expected, and the alert condition is also not working. Could you please check?

index="YYYYYYY" "Genesys system is available" response_details.response_payload.entities{}.onlineStatus="*" response_details.response_payload.entities{}.serialNumber="*"
| rename "response_details.response_payload.entities{}.onlineStatus" as status
| rename "response_details.response_payload.entities{}.serialNumber" as SerialNumber
| where SerialNumber="XXXXXX"
| stats count(eval(status="offline")) AS offline_count
        count(eval(status="online")) AS online_count
        earliest(eval(if(status="offline",_time,""))) AS offline
        earliest(eval(if(status="online",_time,""))) AS online
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online>offline, "Offline but newly online",
    offline_count>0 AND online_count>0 AND offline>online, "Offline",
    offline_count=0 AND online_count=0, "No data")
| search condition="Offline" OR condition="Offline but newly online"
| table condition
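A hedged sketch of one possible adjustment, assuming the field names from the post above (the index name and status values are placeholders copied from it): returning null() instead of "" keeps the time aggregations from mixing empty strings with epoch values, and splitting the payload per device with mvzip/mvexpand lets one search evaluate all 13 serial numbers instead of one alert each.

index="YYYYYYY" "Genesys system is available"
| rename "response_details.response_payload.entities{}.onlineStatus" as status, "response_details.response_payload.entities{}.serialNumber" as SerialNumber
``` pair the parallel multivalue fields, then expand to one device per row ```
| eval pair=mvzip(SerialNumber, status)
| mvexpand pair
| eval SerialNumber=mvindex(split(pair, ","), 0), status=mvindex(split(pair, ","), 1)
| stats count(eval(status="offline")) as offline_count count(eval(status="online")) as online_count latest(eval(if(status="offline", _time, null()))) as last_offline latest(eval(if(status="online", _time, null()))) as last_online by SerialNumber
| fillnull value=0 offline_count online_count
| eval condition=case(
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND last_offline>last_online, "Offline",
    offline_count>0 AND online_count>0 AND last_online>last_offline, "Offline but newly online",
    online_count>0, "Online",
    true(), "No data")
| search condition="Offline" OR condition="Offline but newly online"
| table SerialNumber, condition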
A few different ways to approach this, if I understand your problem correctly:

| makeresults
| fields - _time
| eval Region=split("Bangalore|seattle|bangalore|Galveston|sh bangalore Test", "|")
``` Different eval methods ```
| eval test_loc_method1=mvfilter(match(Region, "(?i)bangalore")),
       test_loc_method2=mvdedup(
           case(
               isnull(Region), null(),
               mvcount(Region)==1, if(match(Region, "(?i)bangalore"), "Bangalore", null()),
               mvcount(Region)>1, mvmap(Region, if(match(Region, "(?i)bangalore"), "Bangalore", null()))
           )
       )
``` Rex method ```
| rex field=Region "(?<rec_loc>(?i)bangalore)"

Mostly it just depends on how you want the output field to look. test_loc_method2 gives a clean single-value result with a hardcoded value whenever the regex pattern is found somewhere in the multivalue field.
And actually you probably mean eval test_loc=case(isnotnull(mvfind(Region, ".*bangalore.*")), "Bangalore"). Note the double quotes around the literal "Bangalore"; without them, eval treats it as a field name.
It requires regex, so you can't use SQL-style % or a simple wildcard; use .* instead.
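A minimal runnable illustration, with the sample value taken from the question below:

| makeresults
| eval Region=split("sh bangalore Test|seattle", "|")
``` mvfind takes a regular expression: (?i) makes it case-insensitive, .* is the wildcard ```
| eval test_loc=case(isnotnull(mvfind(Region, "(?i).*bangalore.*")), "Bangalore")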
Hi All, I am facing an error using a wildcard in a multivalue field. I am using mvfind to find a string.

eval test_loc=case(isnotnull(Region,%bangalore%), Bangalore)

I am just giving part of the eval statement here. Example: Region = "sh bangalore Test". The above eval statement should work on this Region and set test_loc = Bangalore. I tried passing * and % (*bangalore*, %bangalore%), but I am getting an error. Please help me. Thanks, poojitha NV
@dtburrows3 Thank you very much for your assistance. The query works perfectly without:

| where 'days_since_last_login'>14

I tried to play with the number of days after >, but it is still failing (returning no events). Other than that, everything works well.
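If it helps, a small diagnostic sketch (assuming days_since_last_login is computed earlier in your search): when the field is null or stored as a string on every row, a numeric > comparison matches nothing, and typeof/tonumber make that visible.

... your existing search up to where days_since_last_login is computed ...
``` inspect the value and its type before filtering ```
| eval days_type=typeof('days_since_last_login')
| table days_since_last_login, days_type
``` if days_type reports "String", coerce explicitly before comparing ```
| where tonumber('days_since_last_login')>14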
The error contains the regular expression, obviously, but the search may help you narrow down the source file location. Running btool on your Splunk instance(s) is more helpful:

$ $SPLUNK_HOME/bin/splunk cmd btool props list --debug | grep -- EXTRACT-regex_too_large
/opt/splunk/etc/apps/search/local/props.conf EXTRACT-regex_too_large = (?<regex_too_large>.){3120}
Hi @jhooper33, There's no regular expression in the search itself, but you should be able to find the cause in search logs. For example, I've turned my "bad" regular expression into a field extraction, and the following error is logged:

12-13-2023 20:06:05.854 ERROR SearchOperator:kv [35240 searchOrchestrator] - Cannot compile RE \"(?<regex_too_large>.){3120}\" for transform 'EXTRACT-regex_too_large': Regex: regular expression is too large.

I can then trace the configuration, which we can see is an inline EXTRACT in props.conf:

| rest /servicesNS/-/-/configs/conf-props search="EXTRACT-regex_too_large=*" f=EXTRACT-regex_too_large

In my example, EXTRACT-regex_too_large has the value (?<regex_too_large>.){3120}, which we know to be problematic from my last post. After you've identified your regular expression, post it here if you can.
Using a multisearch command may be useful here to help standardize some fields before piping the two datasets into a timechart command. Something like this I think should work:

| multisearch
    [ | search index=rdc sourcetype=sellers-marketplace-api-prod "custom_data.result.id"="*"
      | fields + _time, index, "custom_data.result.id" ]
    [ | search index=leads host="pa*" seller_summary
      | spath input="Data"
      | search "0.lead.form.page_name"="seller_summary"
      | fields + _time, index, "0.id" ]
| eval identifier=coalesce('custom_data.result.id', '0.id')
| dedup index, identifier
| timechart span=1h count as count by index
| eval diff='leads'-'rdc'

I tried to replicate this on my local instance with some similarly structured datasets and think it works out. The resulting dataset should look something like this (but with your indexes, of course).
It may not be the most efficient method, but this should get you started:

index=rdc sourcetype=sellers-marketplace-api-prod custom_data
| search "custom_data.result.id"="*"
| dedup custom_data.result.id
| timechart span=1h count as count1
| append
    [ search index=leads host="pa*" seller_summary
      | spath input="Data"
      | search "0.lead.form.page_name"="seller_summary"
      | dedup 0.id
      | timechart span=1h count as count2 ]
| stats values(*) as * by _time
| eval diff = count1 - count2
If I understand the question correctly, something like this may work:

index=servicenow sourcetype=snow:incident
| fields + _time, number, sys_updated_on, dv_u_last_update, dv_state, active, dv_sys_class_name, dv_assigned_to
| sort 0 +_time
| eval dv_assigned_to=if('dv_assigned_to'=="", null(), 'dv_assigned_to')
| eventstats earliest(dv_state) as first_state, earliest(sys_updated_on) as first_timestamp, values(dv_assigned_to) as assignees by number
``` only include incidents whose first event in the search time window has state=New ```
| where 'first_state'=="New"
| tojson str(sys_updated_on) str(dv_state) str(active) str(dv_assigned_to) output_field=snow_incident_json
| stats values(first_timestamp) as first_timestamp,
        earliest(eval(case('dv_assigned_to'=='assignees', sys_updated_on))) as first_assignment_timestamp,
        list(snow_incident_json) as snow_incident_json
    by number
| foreach first*_timestamp
    [ | eval first<<MATCHSTR>>_epoch=strptime('<<FIELD>>', "%Y-%m-%d %H:%M:%S") ]
| eval minutes_since_incident_creation_to_assignment=round(('first_assignment_epoch'-'first_epoch')/60, 2)
| where 'minutes_since_incident_creation_to_assignment'>15
| fields - *_epoch

The resulting dataset should look something like this. I saw you mention that you needed to see the sequence of events that occurred for incidents that were unassigned for the initial 15 minutes after creation; those details are packaged as a multivalue field of JSON objects. You should be able to add any field you want to this by including it in the tojson command.
Hello! I'm new to Splunk, so any help is much appreciated. I have two queries over different indexes.

Query 1:

index=rdc sourcetype=sellers-marketplace-api-prod custom_data
| search "custom_data.result.id"="*"
| dedup custom_data.result.id
| timechart span=1h count

Query 2:

index=leads host="pa*" seller_summary
| spath input="Data"
| search "0.lead.form.page_name"="seller_summary"
| dedup 0.id
| timechart span=1h count

I would like to write a query that computes Query1 - Query2 for the counts in each hour. It should be in the same format. Thank you!!
I think Splunk can be finicky about some special characters in field names when evaluating logic statements. I think the same applies to field names containing "{" or "}", and maybe even ".".
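A quick demonstration of the quoting rule (the dotted field name is invented for the demo): in eval and where, an unquoted dot is the string-concatenation operator, so a dotted field name must be wrapped in single quotes to be read as one field.

| makeresults
| eval "custom_data.result.id"="abc123"
``` without single quotes, eval would parse the dots as concatenation of three separate fields ```
| eval id_copy='custom_data.result.id'
| where 'custom_data.result.id'="abc123"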
You may need to put single quotes around your field in the where clause. Example:

| makeresults
| eval "Fail%"=25
| where 'Fail%'>10
Thanks, it helped.
It happens for all users.
It's unfortunate that field_{<<ITEM>>}=<<ITEM>> does not work inside an MV foreach statement - the {} assignment does work if mode is not multivalue
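One workaround sketch, assuming the goal is one generated field per multivalue item: mvexpand the values into rows, where the plain {} assignment does work, then fold the rows back together with stats.

| makeresults
| eval mv=split("alpha|beta|gamma", "|")
``` one row per value ```
| mvexpand mv
``` in an ordinary eval, {} derives the new field's name from the value of mv ```
| eval field_{mv}=mv
``` collapse back to a single result carrying field_alpha, field_beta, field_gamma ```
| stats first(field_*) as field_*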
You can use the @PickleRick solution just by adding his code after your appendpipe, but if you normally have a LOT of rows then the transpose and row processing may be quite heavy. There is an alternate solution; I'm not sure how it performs with a large result set or how it compares to the other one.

your base search with results...
| appendpipe
    [ | stats count
      | where count=0
      | eval error="None"
      | fields - count ]
``` Rename all fields to X_* ```
| rename * as X_*
``` Now move those fields to the real name if it's not null ```
| foreach X_*
    [ eval "<<MATCHSTR>>"=if(isnull(<<FIELD>>), null(), <<FIELD>>) ]
``` and remove all the original X_ fields, so that only non-null fields remain ```
| fields - X_*

With Splunk there is often more than one way to solve a problem.
Hi, I need help with a Splunk search. My requirement is to get the stats for failed and successful counts along with the percentage of failed and successful, and finally to fetch the stats only when the failed % is > 10%. My query works fine until the below:

index=abcd
| eval status=case(statuscode < 400, "Success", statuscode > 399, "Failed")
| stats count(status) as TOTAL count(eval(status="Success")) as Success_count count(eval(status="Failed")) as Failed_count by Name, URL
| eval Success%=((Success_count/TOTAL)*100)
| eval Failed%=((Failed_count/TOTAL)*100)

The above works and I get the table with Name, URL, TOTAL, Success_count, Failed_count, Success%, Failed%. Now, when I add the below to the above query, it fails:

| where Failed% > 10

How do I get the failed% > 10 with the above table? Please assist.