All Topics

According to the cron docs, the Sunday code is 0. When I try to run this cron for the first Sunday of the month, it displays Saturday! 00 12 1,2,3,4,5,6,7 * 0 Of course, when I use 6 for Saturday, it works! 00 12 1,2,3,4,5,6,7 * 6 What code am I supposed to use for Sunday? TIA! David
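For reference, the standard five-field cron layout, where day-of-week 0 denotes Sunday in most implementations (a sketch; note that when both day-of-month and day-of-week are restricted, classic cron fires when either field matches, which can produce surprising days):

```
# min  hour  day-of-month  month  day-of-week
  00   12    1-7           *      0
```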
Please advise on my request. Line from the request: | where ('result.code'=-1 OR 'result.code'=1 OR 'result.code'=21 OR 'result.code'=23 OR 'result.code'=SMEV-403) The query finds messages with all result.code values except SMEV-403. How can I fix this?
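A likely fix: in `where`, an unquoted SMEV-403 is parsed as a field name (with a subtraction), not as a string literal. String literals need double quotes; single quotes are reserved for field names:

```
| where 'result.code'=-1 OR 'result.code'=1 OR 'result.code'=21
    OR 'result.code'=23 OR 'result.code'="SMEV-403"
```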
Hi, I have been using Splunk actively for three months. I have created custom insights in AWS Security Hub to monitor continuous compliance tasks. But these are not set up to send alerts when there is a change in the number of failed resources. I understand it is possible to recreate these AWS insights in Splunk and set up alerts when there is a change. How is this done? I imagine these would be standard searches that anyone can use.
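A starting point might look like the following (a sketch only; the index, sourcetype, and field names are assumptions that depend on how the Security Hub findings are ingested), saved as an alert whose trigger condition compares against the previous run:

```
index=aws sourcetype="aws:securityhub:finding" "Compliance.Status"=FAILED
| stats dc("Resources{}.Id") AS failed_resources BY Title
```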
Hello all, New Splunker here, so forgive me if this is totally the wrong way to do it. I was asked to make a comparison dashboard for application performance before and after the monthly patch. I was able to do so with the following code:

index=erp sourcetype=erp_heartbeat tenant=AX2 earliest=-31d@month latest=-1d@month
| eval custate="Post-Update"
| append
    [ search index=erp sourcetype=erp_heartbeat tenant=AX2 earliest=-61d@month latest=-30d@month
      | eval custate="Pre-Update" ]
| chart avg(duration) by trans_name, custate

In a recent touchpoint, it was requested that users be able to change the dates to look at prior months' numbers. I can't figure out how to accomplish this since I'm using specific earliest and latest time modifiers, so any help would be tremendously appreciated. Thank you all, as I've gotten this far with your community.
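One way to make the window selectable (a sketch: $m$ is a hypothetical dashboard dropdown token holding how many months back to look, e.g. 1 for last month), snapping both windows to month boundaries:

```
index=erp sourcetype=erp_heartbeat tenant=AX2
    earliest=-$m$mon@mon latest=-$m$mon@mon+1mon
| eval custate="Post-Update"
| append
    [ search index=erp sourcetype=erp_heartbeat tenant=AX2
        earliest=-$m$mon@mon-1mon latest=-$m$mon@mon
      | eval custate="Pre-Update" ]
| chart avg(duration) by trans_name, custate
```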
I destroyed some VMs in my production environment recently, but I can still see them on the page "IT Essentials Work" => "Infrastructure Overview" => Unix/Linux Add-on. They already have inactive status, but how do I completely remove them from Splunk Cloud? I don't want to monitor them anymore.
My task is to group by two fields, eventid and dest, with a count and the first and last time each combination occurred:

eventid  dest                 count  firsttime            lasttime
256      drdydyf.google.com   56     2022-09-28T19:21:10  2022-09-28T19:21:34
249      bigdaddy.com         78     2022-09-28T19:22:10  2022-09-28T19:22:20
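A stats sketch that produces that shape, assuming the events carry eventid and dest and _time is the event time:

```
... | stats count earliest(_time) AS firsttime latest(_time) AS lasttime BY eventid, dest
| fieldformat firsttime=strftime(firsttime, "%Y-%m-%dT%H:%M:%S")
| fieldformat lasttime=strftime(lasttime, "%Y-%m-%dT%H:%M:%S")
```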
I have an SPL which gives a result. I want to get a trend of the result. So I tried using the timechart command, but it is not working.

Query:

| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta by source,Notable_Events_Meta.rule_id
| `drop_dm_object_name("Notable_Events_Meta")`
| `get_correlations`
| join rule_id
    [| from inputlookup:incident_review_lookup
     | eval _time=time
     | stats earliest(_time) as review_time by rule_id]
| eval ttt=review_time-_time
| stats avg(ttt) as avg_ttt
| sort - avg_ttt
| `uptime2string(avg_ttt, avg_ttt)`
| rename *_ttt* as *(Time_To_Triage)*
| fields - *_dec
| table avg(Time_To_Triage)
| rename avg(Time_To_Triage) as "Mean/Average Time To Respond"
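timechart needs a _time value per bucket, but the final stats collapses everything to a single row with no _time left to chart. One sketch is to bucket the events before averaging, replacing the final single-value stats:

```
... | eval ttt=review_time-_time
| bin _time span=1d
| stats avg(ttt) AS avg_ttt BY _time
```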
Hi, I have a lookup file with the fields biz_department, biz_unit, biz_owner, data_usage. I have a query to generate the data_usage values based on biz_unit. I will schedule the report so that it updates only the data_usage values in the lookup file periodically. How can I call the lookup file and update only that specific field? Thanks MS
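A common pattern reads the lookup back, overwrites just the data_usage column from the scheduled search, and writes the file out again (a sketch; mylookup.csv and the inner search are placeholders):

```
| inputlookup mylookup.csv
| fields - data_usage
| join type=left biz_unit
    [ search index=app_logs | stats sum(bytes) AS data_usage BY biz_unit ]
| outputlookup mylookup.csv
```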
Hello, I have basic questions about the way to geolocate devices with Splunk. Does an add-on exist? If not, is it possible to correlate a tool like NetDB with Splunk using DB Connect? https://web.stanford.edu/group/networking/netdb/help/prod/netdb.html If yes, what are the prerequisites for doing this? Thanks
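For IP-based geolocation, Splunk has a built-in iplocation command (a minimal sketch; the index and src_ip field name are assumptions):

```
index=network
| iplocation src_ip
| stats count BY Country, City
```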
The below search is intended to get status codes from two different sources and put them together in a table. It works, except that it keeps codes separate if they come from different searches. In the table at the bottom, I want only one row for 504, with entries for both searches and the sum (=5).

| multisearch
    [search index=ABC status.code>399 | rename status.code as StatusCode | eval type="search1"]
    [search index=DEF data.status>399 | rename data.status as StatusCode | eval type="search2"]
| chart count over StatusCode by type
| eval sum = search1+search2

   StatusCode  search1  search2  sum
1  400         17       0        17
2  406         10       0        10
3  500         647      0        647
4  504         0        1        1
5  504         4        0        4
6  530         8        0        8
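Duplicate rows like this usually mean the two sources emit the code as different types (string vs. number) or with stray whitespace, so chart sees two distinct values. Normalizing before the chart should merge them (a sketch):

```
| multisearch
    [search index=ABC status.code>399 | rename status.code as StatusCode | eval type="search1"]
    [search index=DEF data.status>399 | rename data.status as StatusCode | eval type="search2"]
| eval StatusCode=tonumber(trim(StatusCode))
| chart count over StatusCode by type
| fillnull value=0 search1 search2
| eval sum=search1+search2
```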
Hello, I apologize if this is not in the correct location. Basically, to simplify things, let's say I have a dashboard with 2 panels doing 2 separate search queries. Both are a "| stats count" and they both return values. What I want to do is change the color of the value of Panel A based on the result of Panel B; for example, if the value of Panel B is 50% larger or smaller than Panel A, then the value of Panel A should turn yellow. But I don't know how to turn the value of a panel into a variable or a token, and use that variable or token to create a range based on a % of that value. Is this possible? I did some research in the Splunk documentation and I thought I found a way to make it work, but I'm not able to get it working. Basically I tried doing

<set token="result_token">$job.resultCount$</set>

which in my mind would use the result count of the panel and store it in "result_token", and then I would be able to use that "result_token" to do something like

<option name="drilldown">none</option>
<option name="rangeColors">["yellow","purple"]</option>
<option name="rangeValues"> "result_token" > 50% = yellow</option>
<option name="rangeValues"> "result_token" < 50% = yellow</option>
<option name="refresh.display">progressbar</option>
<option name="useColors">1</option>

I don't know if I made any sense.
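In Simple XML, a search's <done> handler can capture a result field into a token; a sketch for Panel B (the token name b_count and the query are hypothetical):

```
<search>
  <query>index=panel_b_data | stats count</query>
  <done>
    <set token="b_count">$result.count$</set>
  </done>
</search>
```

Panel A's query could then reference $b_count$ in an eval (e.g. computing the ratio of the two counts) and color on that derived field, since rangeValues only accepts static numbers.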
Hi, I have an index that returns some logs with fields like _time and API names. I would like to display, in a dashboard, report or alert, which APIs have been inactive for more than one week. What I do right now is find the most recent time with latest(_time) and compare it with now using relative_time. It works, but the time range is All Time and it takes some seconds. I am worried that as time goes on it will take too long to get a result. Is there a better way to achieve this?
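tstats reads index-time metadata instead of raw events and is much faster over long ranges (a sketch; the BY clause only works if api_name is an indexed field, otherwise keep latest(_time) but restrict the range):

```
| tstats latest(_time) AS last_seen WHERE index=api_logs BY api_name
| where last_seen < relative_time(now(), "-7d")
```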
I need to personalize the "Data Processing Queues" view provided by the Monitoring Console. I found that the "median" aggregate function on the stats or timechart commands does not work correctly. Indeed, launching the following search over "all time" on my PC (host=localhost), I obtain a median of 0 if there is a 0 among the values. In the attached example, the correct median is 0.73, but Splunk calculates 0.

(group=queue host=localhost index=_internal name=* source=*metrics.log sourcetype=splunkd)
| eval ingest_pipe=if(isnotnull(ingest_pipe),ingest_pipe,"none")
| search ingest_pipe=*
| where match(name,"agg")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size), curr=if(isnotnull(current_size_kb),current_size_kb,current_size), fill_perc=round(((curr / max) * 100),2)
| timechart minspan=30s Median(fill_perc) values(fill_perc) avg(fill_perc) useother=false limit=15

Has anyone else found this issue?
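median in Splunk is percentile-based and can be approximate on larger value sets; the exact variant may be worth comparing (a sketch of the changed timechart line only):

```
| timechart minspan=30s exact_perc50(fill_perc) AS exact_median avg(fill_perc)
```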
I'm currently working on a project that maps different events at different times in different service areas, and so far I've had a lot of luck with geostats. I'm fairly new to Splunk, SQL and XML but have been able to do a lot on my own. I have two questions:

1. Each event that is accumulated in the geostats map has a value assigned to it (between 0 and 7) in a particular field. Is there a way for me to assign a color to each value? I want to be able to look at the map and discern between these different values.

2. I also created a table with these values (Lat, Long, Time and the 0-7 Value), but I want to be able to link it to my geostats map. Is there a way to highlight/reveal the plot point, either when hovering over a row or clicking on it within the table?

I'll number and post both search strings:

1. Geostats map:

source="e:\\folder"
| rex field=_raw "longitude:(?<long>.*) latitude:(?<lat>.*)"
| rex field=_raw "value_id:(?<Value>.*)"
| rex field="date_hour" "(?P<Time>[^\s]+)"
| search long!="null"
| search lat>"0"
| eval n=tonumber(long)
| eval n=tonumber(lat)
| eval lat=printf("%.*f", 8, lat)
| eval long=printf("%.*f", 8, long)
| eval Time=strftime(_time, "%b-%d %H:%M:%S.%Q")
| geostats count longfield=long latfield=lat translatetoxy=true maxzoomlevel=10

2. Table:

source="e:\\folder"
| rex field=_raw "longitude:(?<long>.*) latitude:(?<lat>.*)"
| rex field=_raw "value_id:(?<Value>.*)"
| rex field="date_hour" "(?P<Time>[^\s]+)"
| search long!="null"
| search lat>"0"
| eval n=tonumber(long)
| eval n=tonumber(lat)
| eval n=tonumber(Value)
| eval long=long*-180/pow(2, 23)
| eval lat=lat*90/pow(2, 23)
| eval lat=printf("%.*f", 8, lat)
| eval Value=printf("%.1s",Value)
| eval long=printf("%.*f", 8, long)
| eval Time=strftime(_time, "%b-%d %H:%M:%S.%Q")
| table lat, long, Time, Value

Also if anyone has any criticism of how I can clean this up, let me know.
Again, I'm fairly new to this. Thanks!
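For the first question, splitting the geostats aggregation by the value field gives each of the 0-7 categories its own series color on the map (a sketch of the changed final command only):

```
... | geostats latfield=lat longfield=long maxzoomlevel=10 globallimit=10 count BY Value
```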
Hello, I am currently attempting to set up a Custom Java Endpoint to retrieve the LDAP URL that is being monitored by the Java Agent. I have attempted several configurations, but none seem to be working. I am currently using this doc: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/configure-instrumentation/backend-detection-rules/java-backend-detection/custom-exit-points-for-java#id-.CustomExitPointsforJavav22.1-LDAPExitPoints However, it still only recovers the automated backend LDAP information. Any help would be appreciated. Thanks!
Hello, Does anyone know if it's possible to pull back the time from all the Splunk infrastructure? I have over 200 IDX / SHD / DEP etc. servers, in 4 regions around the world, and I think my NTP is failing/drifting. I want to show my IT dept the problem, if we have one. So is it possible to ask all the Splunk infrastructure for the time, so I can see at a glance that, say, one IDX server is 5 mins out from its cluster buddies? Thanks.
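One rough way to spot drift from the search head compares event time against index time (a sketch; indexing lag includes queueing delay, so treat large per-host outliers as a hint of clock skew, not a direct clock reading):

```
index=_internal sourcetype=splunkd earliest=-15m
| eval lag=_indextime-_time
| stats avg(lag) AS avg_lag_s max(lag) AS max_lag_s BY host
| sort - avg_lag_s
```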
I have two Splunk consoles - one has alerting, the other does not. How do I add alerting to the one that doesn't have it? I do not have "Save as Alert".
Hi Community Support, I have a lookup file with IP addresses where all the values are IP addresses, including the very first field, and it keeps changing. Dummy example:

192.168.10.10
192.168.10.11
192.168.10.12

Because the very first field value itself is an IP address, I want to add a field name to this lookup via a Splunk search, so that my lookup will look like below:

ip_address
192.168.10.10
192.168.10.11
192.168.10.12

Kindly suggest how to achieve these results. Many thanks.
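One workaround (a sketch; my_ips.csv and the literal IP are placeholders): rename the single auto-named column, then append the value Splunk swallowed as the header back in as a row. The cleanest fix, though, is to add an ip_address header line to the CSV file itself.

```
| inputlookup my_ips.csv
| rename "192.168.10.10" AS ip_address
| append
    [ makeresults
      | eval ip_address="192.168.10.10"
      | fields ip_address ]
| outputlookup my_ips_fixed.csv
```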
Hi, I want to find the license utilization of firewall logs based on severity level. Can anyone help me with a query on how to find the license utilization based on particular events, e.g. by EventID in Windows logs?
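License usage is measured at index time, so per-field breakdowns are estimates; a common approximation sums raw event size by the field of interest (a sketch; the index and severity field names are assumptions):

```
index=firewall
| eval bytes=len(_raw)
| stats sum(bytes) AS bytes BY severity
| eval GB=round(bytes/1024/1024/1024, 3)
```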
Hi, We have created the aggregation policies and configured the action rules to create a ticket. We have a requirement to prevent the ticket being created for a few of the hosts. How do we define the filtering criteria to exclude those hosts so that the ticket will not be created for them? And will the episodes still get created in this case? Please clarify. Thanks.