All Topics


Not sure why this is so hard... I want to go back, say, 7/30/90 days and get a stats count of alerts per analyst, i.e. who closed out the most alerts, the most true positives, etc. Then I want the ability to drill in and do a stats count for any specific analyst showing what those alerts were, by rule/alert/notable name, to see what they were mainly dealing with. Why is this so hard?
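One possible starting point, assuming Splunk Enterprise Security, where incident review activity is stored in the `incident_review` KV store lookup (lookup and field names such as `user`, `status`, and `rule_id` vary between ES versions, so treat this as a sketch):

```
| inputlookup incident_review_lookup
| where time >= relative_time(now(), "-90d@d")
| stats count AS reviewed BY user, status
| sort - reviewed
```

For the drill-down, filter to one analyst and count by rule; mapping `rule_id` back to the notable's human-readable rule name may require joining against `index=notable` (again an assumption about your ES setup):

```
| inputlookup incident_review_lookup
| where time >= relative_time(now(), "-90d@d")
| search user="jdoe"
| stats count BY rule_id
| sort - count
```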
Hi all, I have a specific situation where I need to roll buckets from hot to warm on a daily basis, for an index with a very low volume of approx. 20-30 MB per day. The problem is that the setting maxHotSpanSecs = 86400 is not being respected, and the buckets remain in the hot state. I also tried several combinations with maxHotBuckets = 2 or auto, but nothing changes. Can someone help? When does Splunk decide to create new hot buckets? Is it possible that the input volume is so low (in comparison with the default maxDataSize of 750 MB) that Splunk won't enforce the maxHotSpanSecs of 1 day?
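A sketch of an indexes.conf stanza worth testing (the index name is illustrative): maxHotSpanSecs limits the time span of events a hot bucket may cover, but with a low, steady trickle of data the roll may not trigger when you expect; maxHotIdleSecs, by contrast, rolls a hot bucket that has received no new data for the given number of seconds, which often behaves closer to "roll daily" for low-volume indexes:

```
# indexes.conf -- sketch, values illustrative
[my_low_volume_index]
maxHotSpanSecs = 86400   # limit the event-time span a hot bucket may cover
maxHotIdleSecs = 86400   # roll a hot bucket idle for a day
maxHotBuckets  = 2
```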
We are using Splunk Cloud, and the Cloud Monitoring Console provides a graph showing the KB/s and events/s per forwarding instance. I would like to adapt this query to provide the total daily average across a number of different forwarders. Does anyone know what search is being used?
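The CMC's exact search may differ, but forwarder throughput metrics are generally available from the indexers' metrics.log under group=tcpin_connections, which carries per-connection fields like tcp_KBps and tcp_eps. A sketch of a daily average per forwarder:

```
index=_internal source=*metrics.log* group=tcpin_connections
| eval forwarder=coalesce(hostname, sourceHost)
| timechart span=1d avg(tcp_KBps) AS avg_KBps avg(tcp_eps) AS avg_eps BY forwarder
```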
I have successfully used the Splunk Java SDK to write my own .class and embed the code in programs we run on several machines to send and retrieve data. I am able to retrieve search results as a List of <Event> or submit data to the index. However, Splunk does not extract fields from my submitted data in the main index when it is presented as JSON. If I run the following code, the JSON is formatted in the Splunk interface with its red/green JSON highlighting; HOWEVER, the data is not parsed into event fields, and therefore I cannot search the data based on an "application=test" string:   Index myIndex = service.getIndexes().get("main"); eventArgs.put("sourcetype", "_json"); String input = "{\"account\": \"test\",\"password\": \"Welkom\",\"hostname\": \"DESKTOP-KENNETH\",\"application\": \"test\"}"; myIndex.submit(eventArgs, input);   How do I need to submit JSON so that Splunk will recognize it as an event with its corresponding key/value pairs? This search does not retrieve the submitted JSON:   index=main application="test"   The event now has only one key/value pair: "timestamp: none".
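One common cause (an assumption here, not confirmed from your environment) is that search-time JSON field extraction is not active for these events. A sketch using a custom sourcetype with KV_MODE = json defined in props.conf on the search head (the sourcetype name "myapp_json" is illustrative):

```
# props.conf on the search head
[myapp_json]
KV_MODE = json
```

Then submit with eventArgs.put("sourcetype", "myapp_json") instead of "_json". Also check your search time range: "timestamp: none" suggests the event's timestamp was not parsed as expected, which can place the event outside the window you are searching.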
Hi team, I am getting the error below while trying to log in to Splunk using the Java SDK. Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection reset Suppressed: java.net.SocketException: Broken pipe (Write failed)
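"Connection reset" during login is often a TLS protocol mismatch: recent Splunk versions require TLS 1.2 by default, while older SDK/JVM combinations may offer an older protocol. A sketch of pinning the protocol before connecting, assuming splunk-sdk-java (host and credentials are placeholders):

```java
import java.util.HashMap;
import java.util.Map;
import com.splunk.HttpService;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;

public class LoginSketch {
    public static void main(String[] args) {
        // Force TLS 1.2 for all SDK connections before connecting
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        Map<String, Object> connArgs = new HashMap<>();
        connArgs.put("host", "splunk.example.com"); // placeholder
        connArgs.put("port", 8089);
        connArgs.put("username", "admin");          // placeholder
        connArgs.put("password", "changeme");       // placeholder

        Service service = Service.connect(connArgs);
        System.out.println("Logged in; session token acquired: "
                + (service.getToken() != null));
    }
}
```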
I'm running a Splunk search at 5-minute intervals, getting data every 5 minutes. For example, earliest="07/10/2021:07:35:00" AND latest="07/10/2021:07:40:00"; the next interval will be earliest="07/10/2021:07:40:00" AND latest="07/10/2021:07:45:00". So the question is: when there is a log at exactly 07:40:00, will it be duplicated across both intervals? I've done some analysis on this. Splunk provides the data in milliseconds, e.g. 07:40:00.000, and the subsecond part can go up to "07:40:00.000000". When I search, that log's time is shown as 07:39:59.999, but it lands in the 07:40 to 07:45 interval and is not duplicated. Why is "07:40:00.000000" shown as 07:39:59.999? And if it is treated as 07:39:59.999, why does it go into the later interval? Is this Splunk's mechanism to avoid duplication? Can someone please explain how earliest and latest are evaluated by Splunk at millisecond precision?
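In general, earliest is inclusive and latest is exclusive, so back-to-back half-open windows should not double-count a boundary event. If you want the boundaries to be explicit regardless of any display rounding, you can filter on _time directly. A sketch, assuming your index name and the server's timezone (strptime without a timezone uses the search head's local time):

```
index=your_index
| where _time >= strptime("07/10/2021:07:40:00", "%m/%d/%Y:%H:%M:%S")
    AND _time <  strptime("07/10/2021:07:45:00", "%m/%d/%Y:%H:%M:%S")
```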
Hi, I'm trying to figure out a way to create a new field on a heavy forwarder. I want to add a field "splunk_parser" to every event, analogous to the field "splunk_server", to be able to tell where an event was parsed and from which HF it is coming. Ideally the hostname would come from the Linux machine directly. For a small number of HFs I could specify the hostname manually, but not for a larger number of HFs. So what I came up with is this:     props.conf [host::*] TRANSFORMS-splunk_parser= splunk_parser_ext transforms.conf [splunk_parser_ext] INGEST_EVAL = splunk_parser="<hostname_HF>" fields.conf [splunk_parser] INDEXED=true      Is there a way to have <hostname_HF> assigned automatically, via a token or extracted from the default fields? Any hint is highly appreciated. Thank you, David
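As far as I know, INGEST_EVAL has no built-in "this forwarder's hostname" variable (an assumption worth verifying for your Splunk version). One workaround is to stamp each HF's own hostname into its local transforms.conf at deploy time, e.g. from a deployment script. A sketch (the app name and output path are illustrative; here it writes to /tmp so you can inspect the result):

```shell
# Sketch: generate a per-HF transforms.conf containing this machine's hostname.
# In a real deployment, OUT would point into the app's local/ directory, e.g.
# /opt/splunk/etc/apps/hf_parser_tag/local/transforms.conf (assumption).
OUT="${OUT:-/tmp/hf_parser_transforms.conf}"
HF_HOST="$(hostname)"
cat > "$OUT" <<EOF
[splunk_parser_ext]
INGEST_EVAL = splunk_parser="$HF_HOST"
EOF
cat "$OUT"
```

Run once per heavy forwarder (or from your configuration-management tool), then restart or reload the HF so the new transform takes effect.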
Hi, I have a path where logs are copied every day: /opt/splunk/logs/$DATE. I created a script that copies the logs there, but sometimes the logs are not copied and the script just creates an empty directory with the current date, e.g. /opt/splunk/logs/20210712. How can I know via Splunk when the directory is empty? FYI: this path is continuously indexed by Splunk. Any ideas? Thanks
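An empty directory produces no events, so Splunk can't see the directory itself; instead you can alert on the absence of indexed data for the day. A sketch, assuming your index name (alert when this search returns zero results, or use the appended marker row):

```
| tstats count WHERE index=your_index source="/opt/splunk/logs/*" earliest=@d latest=now BY source
| appendpipe [ stats count | where count=0 | eval alert="no logs indexed from /opt/splunk/logs today" ]
```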
Hi, what is the regex for "WFLY*:"? I want to match all JBoss error codes that start with WFLY (then a wildcard) up to the colon. Here is the sample log: 2021-07-11 23:59:02,091 ERROR APP [ACTION] WFLYEJB0034: EJB Invocation failed on component Module for method public common.platform string.platform   Any ideas? Thanks
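A sketch using rex (index/sourcetype are placeholders): WFLY codes like WFLYEJB0034 are uppercase letters followed by digits, so `WFLY[A-Z0-9]+` up to the colon captures them:

```
index=your_index sourcetype=jboss ERROR
| rex "(?<jboss_code>WFLY[A-Z0-9]+):"
| stats count BY jboss_code
```

On the sample line this would extract jboss_code=WFLYEJB0034.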
I want to fetch the results of triggered alerts from time T1 to T2. I tried passing the earliest_time or earliest query params, but it didn't work. Can someone please tell me how to pass time filter params to the following REST API? https://splunk1:8089/servicesNS/nobody/-/alerts/fired_alerts/-?output_mode=json
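To my knowledge the fired_alerts endpoint does not accept earliest/latest parameters (an assumption worth confirming in the REST API reference). An alternative is to run a time-bounded search over the audit trail via the search REST API, which does accept earliest_time/latest_time. A sketch (credentials and field list are placeholders):

```
curl -k -u admin:changeme https://splunk1:8089/services/search/jobs/export \
     -d output_mode=json \
     -d earliest_time='2021-07-10T00:00:00' \
     -d latest_time='2021-07-11T00:00:00' \
     --data-urlencode search='search index=_audit action=alert_fired | table _time ss_name severity'
```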
Hello, I need to display a single value panel with a trend indicator, but it doesn't work. Is something missing?   <dashboard> <label>VIZ</label> <row> <panel> <single> <search> <query>`CPU` | fields process_cpu_used_percent host | search host=06999 | stats count as "Number of peaks" by host</query> <earliest>-7d@h</earliest> <latest>now</latest> </search> <option name="colorMode">block</option> <option name="drilldown">none</option> <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option> <option name="refresh.display">progressbar</option> <option name="trellis.enabled">1</option> <option name="useColors">1</option> <option name="showSparkline">1</option> <option name="showTrendIndicator">1</option> <option name="trendInterval">-1h</option> <option name="underLabel">Compared to an hour before</option> </single> </panel> </row> </dashboard>     Thanks
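The trend indicator and sparkline need a time series as input, but `stats count ... by host` returns a single untimestamped row, so there is nothing to trend against. A sketch of the search element rewritten with timechart (same macro and fields, span is illustrative):

```xml
<search>
  <query>`CPU` | fields process_cpu_used_percent host | search host=06999 | timechart span=1h count AS "Number of peaks"</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>
```

With a _time column present, trendInterval="-1h" can compare the latest value against the value an hour earlier.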
Hi, for the "Endpoint" data model, specifically for the "sysmon" sourcetype, what are all the mandatory fields?
I have indexer clustering and SH clustering in a distributed environment.
This is my search, but it is not complete, and I can't find the solution in the docs.   index=main sourcetype=acc* action=view [search sourcetype=acc* status=200 action=view | top limit=5 referer_domain | table referer_domain productName] | stats count,values(productName),distinct_count(productId) by referer_domain
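One likely issue: a subsearch that returns both referer_domain and productName is expanded into (referer_domain=X AND productName=Y) pairs, which over-constrains the outer search. Returning only referer_domain makes the subsearch expand into an OR of the top five domains. A sketch of the corrected search:

```
index=main sourcetype=acc* action=view
    [ search index=main sourcetype=acc* status=200 action=view
      | top limit=5 referer_domain
      | fields referer_domain ]
| stats count, values(productName) AS products, dc(productId) AS distinct_products BY referer_domain
```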
Hello Splunkers, I want to print events only for the users who have failed login attempts but never any allowed attempts. Here's my search: index=MyApp eventype=authentication action=fail user=* but this one prints all failures, even for users who also have successful attempts. I only want users with only failed attempts and no successful attempts. I hope the picture below clears things up: green: users with only successful logins; yellow: users with both successful and failed logins; red: users with only failed logins. I want the red area only. Thanks
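A sketch that counts both outcomes per user and keeps only the all-failure users (assuming the success events carry action=success; substitute your actual value, e.g. action=allowed):

```
index=MyApp eventtype=authentication (action=fail OR action=success) user=*
| stats count(eval(action="fail"))    AS failures
        count(eval(action="success")) AS successes
        BY user
| where failures > 0 AND successes = 0
```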
Hi, I have a log file like this: 2021-07-06 11:09:18,610 INFO   [deployment] WFLYSRV0027: Starting deployment of "APPS-7.1.2-CUS.war" (runtime-name: "APPS-7.1.2-CUS.war") I want to create a release timeline (event timeline viz) like this: https://cdn.apps.splunk.com/media/public/screenshots/25938208-2138-11e9-9f51-0a7dd926fc04.png   CUS = customer (needs to show on the left, grouped by CUS, e.g. CUS1, CUS2, ...)   7.1.2 = release number (as labels, grouped by version, e.g. 1.1.2, 7.1.2, ...)   Any ideas? Thanks
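A sketch for extracting the customer and version fields from the deployment lines, which the event timeline visualization can then group on (index/sourcetype are placeholders, and the filename pattern is assumed to always be APP-version-customer.war):

```
index=your_index "Starting deployment of"
| rex "Starting deployment of \"(?<app>[^-]+)-(?<version>\d+\.\d+\.\d+)-(?<customer>[^.]+)\.war\""
| table _time customer version
```

On the sample line this yields customer=CUS and version=7.1.2; with real data the customer group (CUS1, CUS2, ...) becomes the timeline's lanes and version the event label.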
Hello, I would like to have a multiselect input on a dashboard, and I want to add options that group values in the drop-down list. For example, I want options like: All states Western states Eastern states AL AK ... where I want to define "Western states" as CA or OR or WA. Can you illustrate the proper syntax for this option so that when "Western states" is selected, it does an OR across CA, OR, and WA? Thank you. <choice value="CA ???????">Western states</choice> Code: <input type="multiselect" token="state" searchWhenChanged="true"> <label>States</label> <fieldForLabel>stcode</fieldForLabel> <fieldForValue>stcode</fieldForValue> <search> <query>|inputlookup somelookup | dedup stcode | sort stcode</query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <valuePrefix>stcode="</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter> OR </delimiter> <choice value="*">All states</choice> <choice value="CA ???????">Western states</choice> </input>
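One common approach (a sketch, relying on how valuePrefix/valueSuffix wrap each selected value): embed the closing quote, the ORs, and the reopening `stcode="` fragments inside the choice value itself, so that after wrapping it expands to stcode="CA" OR stcode="OR" OR stcode="WA":

```xml
<!-- After valuePrefix/valueSuffix wrapping this becomes:
     stcode="CA" OR stcode="OR" OR stcode="WA" -->
<choice value="CA&quot; OR stcode=&quot;OR&quot; OR stcode=&quot;WA">Western states</choice>
```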
Hi all, I'm trying to configure log collection from the local machine, but this error keeps coming up once I try to submit which logs I want:   Encountered the following error while trying to update: Splunkd daemon is not responding: ('Error connecting to /servicesNS/nobody/Splunk_TA_windows/data/inputs/win-event-log-collections/localhost: The read operation timed out',) I recently changed my password, and I heard that could cause this issue, but if so, I'm not sure where to go to fix it. Does anyone know whether this is truly the issue?
Compare each row's value with the previous row's value; if the previous row's value is greater than the present row's value, the present value should be replaced with the previous value; otherwise it remains the same. I have a table like the image above, with sums by date: the third row's values should be compared with the second row's, and if the previous value is greater, the current value should be replaced with the higher one. If anyone can suggest another method, that would also be appreciated. I think you understand what I'm asking; please help me out.
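A sketch using streamstats to look at the previous row (field names `date` and `total` are assumptions for your table; the `sort 0 date` ensures "previous" means the earlier date):

```
... | sort 0 date
| streamstats current=f window=1 last(total) AS prev_total
| eval total=if(isnotnull(prev_total) AND prev_total > total, prev_total, total)
| fields - prev_total
```

Note this compares each row only against the original previous row. If what you actually want is a running maximum (each row never drops below any earlier row), `| streamstats max(total) AS total` after the sort does that in one step.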