All Topics

Hi, how can I exclude the time range 23:55 to 06:00 from a search? I'm using the SPL below, but I need minute precision. index="my-index" NOT (date_hour>=23 date_hour<6) Any ideas? Thanks,
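One hedged sketch (untested, index name taken from the post): `date_hour` only has hour granularity, and the original `NOT (date_hour>=23 date_hour<6)` can never exclude anything because both clauses cannot be true at once (they are ANDed). Deriving an HHMM number from `_time` gives minute precision:

```
index="my-index"
| eval hhmm = tonumber(strftime(_time, "%H%M"))
| where NOT (hhmm >= 2355 OR hhmm < 600)
```

Note this filters after retrieval, so it does not reduce the events the indexers scan the way `date_hour` does.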
Hi, below is the default color of my bar chart. The color is light, so how do I change these default colors? The result of my query is not fixed; the result keeps changing. Also, is there a way to add space between the legend values (the values on the right-hand side of the chart), as they overlap one another? Thank you
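A sketch in Simple XML (untested, hex values illustrative): `charting.seriesColors` assigns colors by series position rather than by field name, so it should keep working even when the result fields change. There is no direct legend-spacing option that I know of; moving the legend to the bottom is a common workaround for overlapping labels:

```xml
<!-- Add inside the <chart> element of the panel -->
<option name="charting.seriesColors">[0x1f77b4, 0xd62728, 0x2ca02c]</option>
<option name="charting.legend.placement">bottom</option>
```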
How do I get a list of all Windows event codes being ingested into Splunk please?
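A minimal sketch, assuming the Windows events land in an index/sourcetype like the defaults used by the Splunk Add-on for Windows (adjust both to your environment):

```
index=wineventlog OR sourcetype=WinEventLog*
| stats count by EventCode
| sort - count
```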
Hi, I'm after a query that I can alert on which shows if one of my hosts hasn't logged a particular message in the last 5 minutes. I have 4 known hosts and, ideally, wouldn't want a separate query/alert for each.     index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now | stats count by host     This gives me a count of that specific event for each of my hosts. I want to know if one (or more) of these drops to zero in the last 5 minutes. All the hostnames are known, so they can be written into the query. I haven't really got close with this one, so some help would be appreciated. Thanks!
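A hedged sketch (hostnames `host1`..`host4` are placeholders for your 4 known hosts): append a zero-count row for every known host, then take the maximum per host, so any host with no real events surfaces with count 0 and the alert can fire on "number of results > 0":

```
index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host
| append [| makeresults
          | eval host=split("host1,host2,host3,host4", ",")
          | mvexpand host
          | eval count=0
          | fields host count]
| stats max(count) as count by host
| where count=0
```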
We are currently running Splunk Enterprise on-prem on a Linux VM and have a search head with several forwarders. How can we maintain search functionality for historical log data but stop/disable ingestion of any new data, either as a blanket stop for all hosts/forwarders or individually? I think this can be done in a .conf file somehow? Is there an easier way to do this? Our Splunk system admin has left our company and I am trying to get up to speed on how this would be done. Thanks,
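A sketch of the per-input approach (the monitor stanza here is illustrative; use the stanza names already present in your forwarders' inputs.conf). Disabling inputs stops new data while historical data on the indexer stays searchable; a blanket stop can be as simple as stopping the forwarder service on each host:

```
# inputs.conf on a forwarder (etc/system/local or the deployed app)
[monitor:///var/log/messages]
disabled = true
```

After editing, restart the forwarder (or redeploy via the deployment server if that is how the inputs are managed).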
Hi, I am trying to fine-tune our license consumption. I can easily check the total number of events that match certain criteria (e.g. a certain Windows event ID), but how can I check the license consumed by them? In other words, the total size of the data set returned by a query. That way I could decide to blacklist certain events, knowing beforehand that the blacklist will save X MB of license per day. Cheers, Jose
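A common approximation (sketch; the EventCode is illustrative): `len(_raw)` is roughly the licensed byte count of an event, so summing it over the candidate events estimates the daily license saving:

```
index=wineventlog EventCode=5156 earliest=-24h
| eval bytes = len(_raw)
| stats count sum(bytes) as total_bytes
| eval MB = round(total_bytes/1024/1024, 2)
```

For exact per-sourcetype figures, the license master's `license_usage.log` in `_internal` is the authoritative source, but it does not break usage down by arbitrary search criteria.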
I have a table which contains a field SSS with search patterns and another field FFF, to which I want to apply the search patterns in order to get the records with matches. Something like:

SSS         FFF
*Tomcat*    /opt/app/tomcat/
*jquery*    libxml2-2.9.1-6.el7.5.x86_64
*           Package Installed Version Required Version python-perf
*jquery*    jQuery Version Prior to 3.5.0

I can't figure out a case-insensitive solution that will return the first, third and fourth records.
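One hedged sketch: translate the glob-style `*` into SQL-style `%` and compare lowercased copies of both fields with `like()`, which makes the match case-insensitive:

```
| eval pattern = lower(replace(SSS, "\*", "%"))
| where like(lower(FFF), pattern)
```

With the sample data above, `%tomcat%` matches `/opt/app/tomcat/`, the bare `%` matches anything, and `%jquery%` matches `jQuery Version Prior to 3.5.0`, so records 1, 3 and 4 are returned.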
Hi, I've installed the Splunk App for Infrastructure into Splunk Enterprise 8.1. I've deployed Splunk Connect for Kubernetes, which is successfully indexing logs + metrics (not objects).   When I try to open SAI I get: "Unable to retrieve migration status. Please refresh the page. If the issue persists, please contact Support."    The logs (`index=_internal sourcetype="splunk_app_infrastructure"`, see below) suggest an SSL issue. Any suggestions? Thanks

2021-09-21 11:58:23,235 - pid:9882 tid:MainThread ERROR em_group_metadata_manager:94 - Failed to execute group metadata manager modular input -- Error: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_group_metadata_manager.py", line 92, in do_execute
    self.update_group_membership()
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/logging_utils/instrument.py", line 69, in wrapper
    retval = f(decorated_self, *args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_group_metadata_manager.py", line 64, in update_group_membership
    all_groups = EMGroup.load(0, 0, '', 'asc')
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_model_group.py", line 253, in load
    query=kvstore_query
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_base_persistent_object.py", line 91, in load
    data_list = cls.storage_load(**params)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/rest_handler/session.py", line 73, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/storage_mixins/kvstore_mixin.py", line 61, in storage_load
    data = cls._paged_load(limit, skip, sort, fields, query_str)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/storage_mixins/kvstore_mixin.py", line 74, in _paged_load
    store = cls.store()
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/storage_mixins/kvstore_mixin.py", line 36, in store
    store = svc.kvstore[cls.storage_name()]
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/client.py", line 1240, in __getitem__
    response = self.get(key)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/client.py", line 1668, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/client.py", line 766, in get
    **query)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 686, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 1194, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 1252, in request
    response = self.handler(url, message, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 1392, in request
    connection.request(method, path, body, head)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1277, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1323, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1272, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1032, in _send_output
    self.send(msg)
  File "/opt/splunk/lib/python3.7/http/client.py", line 972, in send
    self.connect()
  File "/opt/splunk/lib/python3.7/http/client.py", line 1447, in connect
    server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
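One hedged observation, not a confirmed fix: `[SSL: UNKNOWN_PROTOCOL]` during the KV store call typically means the client is speaking HTTPS to a port that is answering in plaintext (or vice versa), i.e. the app and splunkd disagree about SSL on the management port. A setting worth checking on the search head:

```
# server.conf -- sketch; splunkd serves the management port (default 8089)
# over SSL when this is true, which is what the SAI libraries expect
[sslConfig]
enableSplunkdSSL = true
```

If this was deliberately set to `false` in your environment, that mismatch would explain the traceback; changing it requires a splunkd restart and has wider implications, so verify before flipping it.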
Is it possible to have WinHostMon events formatted as XML?
Hi knowledgeable people, I have been provided a query which works successfully when run by the DBA on the server itself. However, when run using DB Connect, no results are found. Please can someone advise why this may be? Thanks. This is the query:

SELECT
    ar.replica_server_name,
    adc.database_name,
    ag.name AS ag_name,
    drs.is_local,
    drs.is_primary_replica,
    drs.synchronization_state_desc,
    drs.is_commit_participant,
    drs.synchronization_health_desc,
    drs.recovery_lsn,
    drs.truncation_lsn,
    drs.last_sent_lsn,
    drs.last_sent_time,
    drs.last_received_lsn,
    drs.last_received_time,
    drs.last_hardened_lsn,
    drs.last_hardened_time,
    drs.last_redone_lsn,
    drs.last_redone_time,
    drs.log_send_queue_size,
    drs.log_send_rate,
    drs.redo_queue_size,
    drs.redo_rate,
    drs.filestream_send_rate,
    drs.end_of_log_lsn,
    drs.last_commit_lsn,
    drs.last_commit_time
FROM master.sys.dm_hadr_database_replica_states AS drs
INNER JOIN master.sys.availability_databases_cluster AS adc
    ON drs.group_id = adc.group_id AND drs.group_database_id = adc.group_database_id
INNER JOIN master.sys.availability_groups AS ag
    ON ag.group_id = drs.group_id
INNER JOIN master.sys.availability_replicas AS ar
    ON drs.group_id = ar.group_id AND drs.replica_id = ar.replica_id
ORDER BY ag.name, ar.replica_server_name, adc.database_name;
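A hedged line of investigation: the HADR DMVs in this query require the VIEW SERVER STATE permission, and SQL Server returns an empty result (or an error swallowed by the client) rather than rows when the login lacks it, so a DBA account can see data while the DB Connect identity sees nothing. A minimal probe to run through DB Connect itself (the connection name is hypothetical):

```
| dbxquery connection="mssql_ag_connection" query="SELECT COUNT(*) AS cnt FROM master.sys.dm_hadr_database_replica_states"
```

If this returns 0 or errors for the DB Connect identity but not for the DBA, ask the DBA to `GRANT VIEW SERVER STATE` to the identity DB Connect uses.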
When configuring threat feeds in ES, the Intelligence Downloads settings include a "Maximum age" field for threat intel downloads. Does anyone know what the Maximum age field does?
Hello, I have an alert that checks cpu_usage and fires every minute. I need it to send me an email once when the value goes above 60, and then a second email once the value drops back below 60. I use the KV store as a state repository. I'm new to Splunk.
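A sketch of the state-machine approach (index/sourcetype and the KV-store lookup `cpu_alert_state` are assumptions; the lookup must be defined in collections.conf/transforms.conf first). The search only returns a row when the state flips, so an alert set to trigger on "number of results > 0" emails exactly once per transition:

```
index=perf sourcetype=cpu
| stats latest(cpu_usage) as cpu
| appendcols [| inputlookup cpu_alert_state | rename state AS previous | head 1]
| eval current = if(cpu > 60, "high", "ok"), previous = coalesce(previous, "ok")
| where current != previous
| eval state = current
| table state cpu
| outputlookup append=false cpu_alert_state
```

The email template can use the `state` field to say whether CPU just went above or back below 60.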
I have an eventtype that I want to delete, but before that I want to make sure that the eventtype isn't used anywhere, like in any data model, correlation search, saved search, dashboard, tags, etc. Is there a way I can figure out where in Splunk an eventtype is used?
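A hedged sketch using the REST API (replace `my_eventtype` with the real name; this catches literal references in search strings and dashboard XML, not every conceivable usage such as tags):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*my_eventtype*"
| table title eai:acl.app search
```

A similar pass over dashboards, whose XML is exposed in `eai:data`:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*my_eventtype*"
| table title eai:acl.app
```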
How do I generate a trendline for the query below? index=os host=*gbcm* sourcetype=cpu VNextStatus=Live | timechart perc90(pctUser) span=10m by host_name
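A sketch for the single-series case: `trendline` needs explicit field names, and `timechart ... by host_name` produces one column per host, so the simplest form drops the `by` clause and smooths the aggregate (here a 6-point simple moving average, i.e. one hour at span=10m):

```
index=os host=*gbcm* sourcetype=cpu VNextStatus=Live
| timechart perc90(pctUser) as p90 span=10m
| trendline sma6(p90) as p90_trend
```

For a per-host trend, one approach is `untable _time host_name p90` followed by `streamstats window=6 avg(p90) as trend by host_name`, then re-charting.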
I have a dashboard with several multi-value fields containing IP details. I applied the following fieldformat command to truncate the result of such fields for the dashboard view. | fieldformat iplist=mvjoin(mvindex(iplist, 0, 9), ", ").if(mvcount(iplist)>10, " (".(mvcount(iplist)-10)." IPs truncated...)","") The goal is to create a field similar to the output below: 10.10.10.1, 10.10.10.2, 10.10.10.3, 10.10.10.4, 10.10.10.5, 10.10.10.6, 10.10.10.7, 10.10.10.8, 10.10.10.9, 10.10.10.10 (3 IPs truncated...) The fields are displayed in a dashboard table view according to the formatting; however, when I try to drill down on these fields, the drilldown carries over the formatted value, not the original multi-value content. I have included a test dashboard to demonstrate the behaviour. How can I modify the fieldformat command to truncate the field but also enable the dashboard to use the original field value in drilldowns? Thanks

<form>
  <label>Fieldformat Test</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="text" token="tokIPList" searchWhenChanged="true">
      <label>IP List</label>
      <default>10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4 10.10.10.5 10.10.10.6 10.10.10.7 10.10.10.8 10.10.10.9 10.10.10.10 10.10.10.11 10.10.10.12 10.10.10.13</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>IP List input text displayed as multi value field</title>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval iplist=$tokIPList|s$ | eval iplist=split(iplist, " ") | table iplist</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">row</option>
        <drilldown>
          <set token="tokDrilldown">$row.iplist$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>IP List input text displayed with fieldformat applied</title>
      <table>
        <search>
          <query><![CDATA[| makeresults | fields - _time | eval iplist=$tokIPList|s$ | eval iplist=split(iplist, " ") | table iplist | fieldformat iplist=mvjoin(mvindex(iplist, 0, 9), ", ").if(mvcount(iplist)>10, " (".(mvcount(iplist)-10)." IPs truncated...)","")]]></query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">row</option>
        <drilldown>
          <set token="tokDrilldown">$row.iplist$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Drilldown test</title>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval formatted_iplist=$tokDrilldown|s$ | table formatted_iplist</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
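One hedged workaround (untested sketch): instead of `fieldformat`, build a separate display field with `eval`, keep the raw field in the `table` command but hide it with the `fields` element, and point the drilldown token at the raw field:

```xml
<table>
  <search>
    <query><![CDATA[| makeresults | fields - _time
| eval iplist=$tokIPList|s$
| eval iplist=split(iplist, " ")
| eval iplist_display=mvjoin(mvindex(iplist, 0, 9), ", ").if(mvcount(iplist)>10, " (".(mvcount(iplist)-10)." IPs truncated...)","")
| table iplist_display iplist]]></query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fields>iplist_display</fields>
  <option name="drilldown">row</option>
  <drilldown>
    <set token="tokDrilldown">$row.iplist$</set>
  </drilldown>
</table>
```

The assumption here is that `$row.iplist$` still resolves for a field that is in the results but hidden by `<fields>`; worth verifying in your version before relying on it.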
Hello everyone, I have successfully installed Splunk Stream in a distributed environment. Stream data is indexed remotely and can be searched manually. I have a couple of questions: 1) I am initiating a 15-minute Ephemeral Stream using the Splunk ES Incident Review console (via the "Stream Capture" adaptive response action), with "All" protocols selected. I can see the Ephemeral Stream under the "Configure Streams" UI. Even though it starts 9 streams, after 15 minutes the streams disappear. Does this mean the streams were empty? Normally they would have a link that I can click to search them? Can I export them for later use, or as an artifact in an investigation? 2) In which index do these Ephemeral Streams get captured/indexed? 3) Even though my streams are working and data is coming in, I see that the Configure Streams "Avg. Traffic" and "Recent Traffic per protocol (15m)" columns are all zero. Why does this happen? This applies to both the enabled and estimated streams. Thank you in advance for your help. With kind regards, Chris
Hi all, my organisation has installed a few custom alert actions on our instance of Splunk, which we are now able to trigger from alerts (i.e. when editing an alert, they appear in the drop-down of the "Trigger Actions" section). What I would like to do is trigger these actions from a dashboard drilldown. Is there a way to do this? When I edit the drilldown, the only action options I see are the standard ones (screenshot not shown). Many thanks in advance!
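One hedged route: the `sendalert` search command invokes a custom alert action ad hoc, so a drilldown can open a search that runs it (the action name `my_custom_action` and its `param.target` parameter are hypothetical stand-ins for your installed actions and their parameters):

```
| makeresults
| sendalert my_custom_action param.target="$row.host$"
```

In the dashboard, this would sit behind a `<drilldown><link target="_blank">search?q=...</link></drilldown>` element with the SPL URL-encoded; the `$row.host$` token is resolved by the dashboard before the search page opens.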
Hello everybody, I need to connect an instance of Oracle OAM to Splunk. Do you have any suggestions on how to achieve this? Thanks in advance.
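One common pattern is a Universal Forwarder on the OAM host monitoring the server's log files. A sketch, with the stress that the path and sourcetype below are purely illustrative (OAM log locations depend on your WebLogic domain layout):

```
# inputs.conf on a forwarder installed on the OAM host
[monitor:///u01/app/oracle/domains/oam_domain/servers/oam_server1/logs/oam_server1-diagnostic.log]
sourcetype = oam:diagnostic
index = oam
disabled = false
```

Alternatives worth evaluating are OAM's audit-to-database option combined with Splunk DB Connect, or syslog forwarding if your OAM version supports it.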
Hello, I use a Splunk app with many different dashboards and I have 2 improvements to make. 1) I need to put an icon after the name of my app. Unless I am mistaken, I put the icon file in the static folder, but what do I have to do to display the icon after the app name? 2) I need to open a PDF file from my dashboard. The PDF is located in the "static" folder, in the "KM" directory. Example: etc/apps/workplace/static/KM/TEST.pdf

<row>
  <panel>
    <html>
      <p>
        <a target="_blank" href="/static/app/workplace/static/KM/TEST.pdf">
          <img width="48" height="38">TEST
        </a>
      </p>
    </html>
  </panel>
</row>

But I can't open it. What is the correct path to use, please? Regards
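For the PDF, a hedged sketch: as far as I know, Splunk Web serves files placed under `etc/apps/<app>/appserver/static/` at the URL `/static/app/<app>/...` (the app-root `static` folder is reserved for things like the app icon and is not served the same way). Assuming the file is moved to `etc/apps/workplace/appserver/static/KM/TEST.pdf`, the link would be:

```xml
<a target="_blank" href="/static/app/workplace/KM/TEST.pdf">TEST</a>
```

A restart or a `_bump` of Splunk Web may be needed before newly added static files are served.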
Hello everyone, how can I collect the registered services from a Windows server and display them in a dashboard, showing which ones are faulty or in error? Please help me with this. Thanks and regards, Subhan
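A hedged sketch, assuming services are collected via the WinHostMon input and that the events carry `DisplayName` and `State` fields (field names can vary by add-on version, so check your own data first):

```
sourcetype=WinHostMon source=service
| stats latest(State) as State by DisplayName
| eval status = if(State="Running", "OK", "Faulty")
| table DisplayName State status
```

Put this behind a dashboard table panel, optionally with color formatting on the `status` column.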