All Topics


I have an index with thousands of operating systems (OS). I want to remove unwanted OS values from my report using wildcards, since many of the unwanted entries share the same substring in the OS name. Here is what I'm trying to do:

earliest=-15d@d index="asset" sourcetype="Tenable:SecurityCenter:Asset" WHERE operating_system NOT "[APC*" OR "[AIX*" | stats count by operating_system

I want to exclude OS values that contain APC or AIX (and others not listed) from the query, but I can't get a wildcard to work, which would otherwise mean hundreds of entries just for APC and all the versions I want to exclude. I've tried NOT IN, NOT LIKE, != and more, but either nothing is returned, or what I want filtered out is not filtered and all events are returned. Suggestions?
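A minimal sketch of one approach, assuming the leading bracket is literal in the OS names (the index, sourcetype, and field names are taken from the question): the search command accepts wildcards on the right-hand side of field=value, so the exclusion can live in the base search rather than in a WHERE clause:

    earliest=-15d@d index="asset" sourcetype="Tenable:SecurityCenter:Asset"
        NOT (operating_system="[APC*" OR operating_system="[AIX*")
    | stats count by operating_system

The IN operator also honours wildcards in the search command, so NOT operating_system IN ("[APC*", "[AIX*") keeps the list compact as more exclusions accumulate.
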
Greetings Splunk Community! I've looked through the pages here and haven't been fortunate enough to find a working answer that matches what I'm looking for. I'm trying to compare events within the past 24 hours against the average of events seen in the past week or month. Below are some threads which seemed similar to my question:

https://community.splunk.com/t5/Splunk-Search/Using-timewarp-to-compare-average-of-last-30-days-to-current-24/m-p/557919
https://community.splunk.com/t5/Splunk-Search/Need-help-on-how-to-alert-if-daily-count-exceed-30-days-average/m-p/549636 <-- unable to get this modified to work as desired

Below is a screenshot of the search and output. It appears to me that the eval statement is just taking the count for today and dividing it by 7; it is not producing an actual 7-day average of the past week. I feel like I'm overlooking something obvious, but at the moment it is escaping me.
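One way to get a genuine per-day average rather than today's count divided by 7 (a sketch; the index and search terms are placeholders to replace): bucket by day first, then average the daily counts of the prior days and compare the most recent day against that baseline:

    index=your_index your_search_terms earliest=-8d@d latest=@d
    | bin _time span=1d
    | stats count as daily_count by _time
    | eventstats avg(eval(if(_time < relative_time(now(), "-1d@d"), daily_count, null()))) as prior_7d_avg
    | where _time >= relative_time(now(), "-1d@d")
    | eval pct_of_avg = round(100 * daily_count / prior_7d_avg, 1)

The eventstats averages only the seven complete days before the most recent one, so the final row shows the last day as a percentage of that average.
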
Here is the query I am starting with:

index=anIndex sourcetype=aSourceType ("StringA" OR "StringB")
| eval type=case(like(_raw, "%StringA%"), "A", like(_raw, "%StringB%"), "B")
| timechart span=10m count by type
| eval Percentage=round((A/B)*100,2)
| eval Threshold-90=90
| eval Threshold-75=75
| fields + _time, Percentage, Threshold-90, Threshold-75

First off, I am not 100% sure the above query is correct, but I do get data that I can chart into a dashboard, trying to show 'Percentage' over time. I'm using this in a dashboard with a chart overlay (Percentage, Threshold-90, Threshold-75), and the resulting timechart graph appears to show the calculated Percentage correctly in 10-minute intervals. What I am wondering is whether it is possible to make the calculation (Percentage) using data looking back X hours. The way I understand it, each Percentage point in the graph above is computed from the occurrences of "StringA" and "StringB" within that point's own 10-minute bucket only.
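Two notes, offered as a sketch rather than a definitive fix. First, a field name containing a hyphen has to be single-quoted on the left side of eval (eval 'Threshold-90'=90), or eval parses Threshold-90 as a subtraction; renaming to Threshold90 sidesteps this. Second, if each plotted point should reflect a trailing window instead of its own bucket, streamstats with time_window can run over the timechart output (the 4-hour window here is an arbitrary example):

    index=anIndex sourcetype=aSourceType ("StringA" OR "StringB")
    | eval type=case(like(_raw, "%StringA%"), "A", like(_raw, "%StringB%"), "B")
    | timechart span=10m count by type
    | streamstats time_window=4h sum(A) as A_window sum(B) as B_window
    | eval Percentage=round((A_window / B_window) * 100, 2)
    | eval Threshold90=90, Threshold75=75
    | fields _time, Percentage, Threshold90, Threshold75

Because timechart emits rows in ascending time order, the streamstats window becomes a rolling 4-hour lookback at every 10-minute point.
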
Hello, I can't get the events to sort by time. The time field format is, for example: 1632218561. What is wrong, please?

index="tutu" sourcetype="toto"
| search statustext=TimedOut
| sort - time
| eval time = strftime(_time, "%d-%m-%y %H:%M")
| stats last(time) as Heure, last(statustext) as statustext by desktop
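A guess at the cause: at the point where sort - time runs, the field time does not exist yet (the eval that creates it comes later), so the sort has nothing to act on. If the goal is the most recent value per desktop, a sketch that avoids sorting entirely is to let latest() pick by _time:

    index="tutu" sourcetype="toto" statustext=TimedOut
    | eval time = strftime(_time, "%d-%m-%y %H:%M")
    | stats latest(time) as Heure, latest(statustext) as statustext by desktop

If a descending sort really is needed, sorting on _time (the epoch field) before the eval would also work: | sort - _time.
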
Hello, I'm building some dashboard statistics from telecom data. I have a data source as follows:

_time  OfferedTime  PickedUpTime  Offered="0/1"  Handled="0/1"

_time is populated with OfferedTime. The user can use a time picker that generates a token. I manipulate this token in the dashboard, going 5 days into the past for earliest and 5 days into the future for latest, to get a wider data set than the one selected by the user; I then use variables in the search to restore the time boundaries to the initial selection, which I use for some specific calculations (not shown in the code sample). I'm trying to timechart some metrics and remove all data that is outside the time range initially selected by the user:

[MYSEARCH]
| addinfo
| eval end=if(info_max_time=="+Infinity",now(),info_max_time)-432000
| eval beginning=if(info_min_time=="0.000",1604260193,info_min_time)+432000
| eval DateBegin = beginning
| eval DateEnd = end
| eval FormatTime = _time
| timechart count(eval(if(strptime(OfferedTime,"%Y-%m-%d %H:%M:%S.%Q") > beginning and strptime(OfferedTime,"%Y-%m-%d %H:%M:%S.%Q") < end,Offered,null()))) as OfferedCalls
    count(eval(if(Handled="1" AND strptime(PickedUpTime,"%Y-%m-%d %H:%M:%S.%Q") > beginning and strptime(PickedUpTime,"%Y-%m-%d %H:%M:%S.%Q") < end AND BindType_ID!=4 AND BindType_ID!=5,Handled,null()))) as HandledCalls
| where _time > beginning and _time < end

I added DateBegin / DateEnd / FormatTime because I wanted to check in the events tab that my dates had the correct format and could be compared:

_time                    OfferedTime            DateBegin       DateEnd         FormatTime
21/09/2021 18:24:54,000  2021-09-21 18:24:54.0  1630926000.000  1632223379.000  1632241494.000

The result of this search is... no results found. If I go to the events tab, copy the DateBegin and DateEnd values, and change my search to:

| where _time > 1630926000.000 and _time < 1632223379.000

it works fine and I get the expected result. I don't understand why. If I don't put the where condition at the end, I get this result:

_time       OfferedCalls  HandledCalls
2021-09-04  0             0
2021-09-05  0             0
2021-09-06  156           115
2021-09-07  215           174
2021-09-08  280           217
2021-09-09  227           176
2021-09-10  223           184
2021-09-11  0             0
2021-09-12  0             0
2021-09-13  336           254
2021-09-14  285           220
2021-09-15  228           172
2021-09-16  243           177
2021-09-17  273           197
2021-09-18  0             0

What I'm trying to get is:

_time       OfferedCalls  HandledCalls
2021-09-06  156           115
2021-09-07  215           174
2021-09-08  280           217
2021-09-09  227           176
2021-09-10  223           184
2021-09-11  0             0
2021-09-12  0             0
2021-09-13  336           254
2021-09-14  285           220
2021-09-15  228           172
2021-09-16  243           177
2021-09-17  273           197

Basically, I want to get rid of the data before/after my date range (beginning / end) without losing the 0 values that fall inside the time range. I tried various functions to replace 0 with NULL outside the range, but couldn't get that to apply only outside my time range. If anybody has an idea on how to solve this issue, that would be great. Thanks in advance!
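A likely explanation (worth verifying): timechart only carries _time and the aggregated series forward, so beginning and end no longer exist by the time the final where runs. Comparing _time against a null field is never true, which filters out every row, and which is also why hard-coding the two numbers works. One sketch is to re-derive the bounds after the timechart with a second addinfo; the simplified timechart line below stands in for the original one, and the first addinfo block is kept because the original query uses beginning/end inside the timechart evals:

    [MYSEARCH]
    | addinfo
    | eval end=if(info_max_time=="+Infinity", now(), info_max_time) - 432000
    | eval beginning=if(info_min_time=="0.000", 1604260193, info_min_time) + 432000
    | timechart span=1d count as OfferedCalls
    | addinfo
    | eval end=if(info_max_time=="+Infinity", now(), info_max_time) - 432000
    | eval beginning=if(info_min_time=="0.000", 1604260193, info_min_time) + 432000
    | where _time > beginning AND _time < end
    | fields - beginning, end, info_*

Because the filtering happens after aggregation, the zero-count days inside the range survive; only the buckets outside beginning/end are dropped.
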
Hi there, my app's setup page leverages react, reactDOM, and bluebird, stored under the vendor folder. Previously the setup page worked well in both Chrome and Safari, but today both browsers returned errors like this when loading the setup page. I tried re-uploading those three dependency files, but the errors remained. However, I can still load and access the setup page in Chrome with security mode turned off. Could anyone help with this? How can I get rid of these errors when loading in a normal browser? Thank you!
AppD is detecting transactions using custom include rules in tiers that are not contained in the scope of the rules doing the detecting. This is happening despite my having higher-priority custom include rules that should detect those transactions. The rules that are detecting the transactions should not be detecting anything in that tier, because their scope does not include the tier where the transactions are being detected. And even if the tier were included in their scope, the custom rule I have in place SHOULD override those rules based on priority. The only way I have been able to get my custom include rule to detect the transactions is to completely disable the two lower-priority custom include rules that are masking my rule (but should not be). I have tried both including the tier (rate-service-3534853) in the scope (AllTierse) and excluding it, and the effect is the same: both rules, 'default-Servlet-catchall' and 'Default-Spring Bean - Catchall', continue to detect transactions in the 'rate-service-3534853' tier, and in doing so mask my higher-priority custom include rule. What gives?

('AllTierse' scope showing that 'rate-service-3534853' is excluded from the scope. Tried it both included and excluded; made no difference. Rules using this scope are invoked regardless.)
(Custom match rule for the 'default-Servlet-catchall' rule. Low priority, '1'.)
(Custom match rule for the 'Default-Spring Bean - Catchall' rule. Low priority, '1'.)
(Transaction detection snapshots showing the 'default-Servlet-catchall' and 'Default-Spring Bean - Catchall' rules detecting transactions in the 'rate-service-3534853' tier.)
(Configuration showing that the 'default-Servlet-catchall' and 'Default-Spring Bean - Catchall' rules are not even applied to the 'rate-service-3534853' tier, and yet both of those rules are masking the rule I have highlighted in the screenshot below.)
Hi, I was able to install and configure the AMP for Endpoints Event Inputs app for all event types and groups. However, when I search in Splunk with index=* sourcetype="cisco:amp:event", I can only see AMP4E events from around 8 hours ago; none of the recent AMP4E events show up, and I'm not sure why.
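To tell whether the events are arriving late (API polling lag) or being timestamped oddly, comparing _time with _indextime can help; a sketch:

    index=* sourcetype="cisco:amp:event" earliest=-24h
    | eval lag_min = round((_indextime - _time) / 60, 1)
    | stats max(_time) as newest_event, max(_indextime) as newest_indexed, avg(lag_min) as avg_lag_min
    | convert ctime(newest_event) ctime(newest_indexed)

If newest_indexed is current but newest_event trails by hours, the input is keeping up and the gap is in the event timestamps (or the API stream itself); if both trail, the input has fallen behind.
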
I'm trying to extract one field from a log line: just the email. I can't extract the single field, and I get an error saying my rex has exceeded the configured match_limit and to consider raising the value in limits.conf. Any suggestion on where I'm going wrong? Is it possible my rex (below) is not right? I used the Splunk Cloud field extractor.

Error in 'rex' command: regex="(?ms)^\d+\-\d+\-\d+\w+\d+:\d+:\d+\.\d+\+\d+:\d+\s+\d+\.\d+\.\d+\.\d+\s+\w+\-\w+\s+\d+\s+\-\s+\[\w+\s+\w+="\d+"\]\s+\w+\s+\w+\s+\w+\s+\w+:\s+<\d+>\d+\s+\d+\-\d+\-\d+\w+\d+:\d+:\d+\.\d+\-\d+:\d+\s+\w+\-\w+\-\w+\s+\-\s+\-\s+\w+>@<\s+\{\s+"\w+":\s+"\d+\.\d+",\s+"\w+":\s+"\w+",\s+"\w+":\s+"\d+\-\d+\-\d+\w+\d+:\d+:\d+\.\d+\w+",\s+"\w+":\s+"\w+\d+\w+\d+\w+\d+\w+/\d+\w+\d+\w+\d+\w+\d+\w+/\d+\w+\d+\w+=",\s+"\w+":\s+\{\s+"\w+":\s+"\w+\d+\w+\-\w+",\s+"\w+":\s+"\d+\.\d+\.\d+\.\d+",\s+"\w+":\s+"(?P<sss>[^"]+)" has exceeded configured match_limit, consider raising the value in limits.conf

Sample log:

2018-10-14T12:55:30.418+00:00 10.3.4.150 syslog-ng 176 - [meta sequenceId="100000"] Error processing log message: <14>1 2018-10-21T08:55:30.791523-04:00 CB-ID-SCT - - RemoteLogging>@< { "logVersion": "1.0", "category": "AUDIT", "timeStamp": "2021-09-21T12:53:16.879Z", "id": "vy1m6dhu0xlrRdo0se5IJmWQnR8mPb+QpeFcILHySTU=", "context": { "tenantId": "ZZNXA0OELD-STA", "originatingAddress": "54.189.24.789", "principalId": "tundern@gmail.com", "sessionId": "cd419cd2-fge7f-5671-98c0-87d8b1e035dd", "globalAccessId": "42ga93ea-x5a9-81c8-4a87-b32b9abc3fa2", "applicationType": "SAML", "applicationName": "Cali Baco localSafetys - WC02", "policyName": "Global Policy for STA" }, "details": { "type": "ACCESS_REQUEST", "state": "Accepted", "action": "auth", "credentials": [{ "type": "cut", "state": "Verified" } ] } }
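The generated regex backtracks heavily because nearly every token is a greedy \w+ or \d+ separated by \s+ across a multi-line match, which is what blows past match_limit. Since the value sits under a stable JSON key, a much cheaper sketch is to anchor on the key itself (assuming the email always appears as principalId; index and sourcetype are placeholders):

    index=your_index sourcetype=your_sourcetype
    | rex "\"principalId\":\s*\"(?P<email>[^\"]+)\""

Anchoring on a short literal lets the regex engine jump straight to the match instead of walking the whole event character by character.
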
Hi, how can I exclude the time range 23:55 to 06:00 from a search? I'm using the SPL below, but it only works on whole hours and I need minute precision:

index="my-index" NOT (date_hour>=23 date_hour<6)

Any ideas? Thanks.
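date_hour only carries the whole hour, so one sketch is to derive minutes-since-midnight from _time and filter on that:

    index="my-index"
    | eval mins = tonumber(strftime(_time, "%H")) * 60 + tonumber(strftime(_time, "%M"))
    | where NOT (mins >= 23*60+55 OR mins < 6*60)

Here 23*60+55 is 23:55 and 6*60 is 06:00, so events at or after 23:55 and strictly before 06:00 are dropped.
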
Hi, below are the default colors of my bar chart. The colors are light, so how do I change these defaults? The result of my query is not fixed; the results keep changing. Also, is there a way to add space between the legend values (the values on the right-hand side of the chart), as they are overlapping one another? Thank you.
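If this is a Simple XML dashboard, the series palette and the legend behaviour can be overridden per chart; a sketch with arbitrary example colours (assigned by position, so it copes with changing series names):

    <chart>
      <search>
        <query>your search here</query>
      </search>
      <option name="charting.seriesColors">[0x1f77b4, 0xd62728, 0x2ca02c, 0x9467bd]</option>
      <option name="charting.legend.placement">bottom</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
    </chart>

Moving the legend to the bottom usually relieves the overlap on the right-hand side; overflowMode controls how long labels are truncated.
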
How do I get a list of all Windows event codes being ingested into Splunk, please?
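A sketch, assuming the Windows events land in the conventional wineventlog index with an EventCode field (adjust the index to your environment):

    index=wineventlog
    | stats count by sourcetype, EventCode
    | sort - count

This lists every event code seen, with volumes, split by sourcetype.
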
Hi, I'm after a query that I can alert on which shows if one of my hosts hasn't logged a particular message in the last 5 minutes. I have 4 known hosts and, ideally, wouldn't want a query/alert for each.

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host

This gives me a count of that specific event for each of my hosts. I want to know if one (or more) of these drops to zero in the last 5 minutes. All the hostnames are known, so they can be written into the query. I haven't really got close with this one, so some help would be appreciated. Thanks!
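One pattern is to seed every known host with a zero row, then sum, so a host with no events in the window still appears and can be alerted on (a sketch; host1..host4 stand in for the real names):

    index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
    | stats count by host
    | append
        [| makeresults
         | eval host=split("host1,host2,host3,host4", ",")
         | mvexpand host
         | eval count=0
         | fields host, count]
    | stats sum(count) as count by host
    | where count=0

Set the alert to trigger when the number of results is greater than zero; each returned row is a silent host.
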
We are currently running Splunk Enterprise on-prem on a Linux VM, with a search head and several forwarders. How can we maintain search functionality for historical log data but stop/disable ingestion of any new data, either as a blanket stop for all hosts/forwarders or individually? I think this can be done in the .conf files somehow? Is there an easier way to do this? Our Splunk system admin has left our company and I am trying to get up to speed on how this would be done. Thanks.
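Historical data stays searchable regardless, since it already lives in the indexes; ingestion and search are independent. A couple of hedged sketches in .conf form (the stanza path is an example, not one of your actual inputs). Per input, set disabled in inputs.conf on the forwarder; to stop a forwarder sending anything at all, disable its tcpout group in outputs.conf:

    # inputs.conf on a forwarder -- stop collecting one input
    [monitor:///var/log/example.log]
    disabled = 1

    # outputs.conf on a forwarder -- stop this forwarder sending any data
    [tcpout]
    disabled = true

For a blanket stop at the receiving end instead, the indexer's listening port can be disabled under Settings > Forwarding and receiving, which cuts off all forwarders at once.
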
Hi, I am trying to fine-tune our license consumption. I can easily check the total number of events that match certain criteria (e.g. a certain Windows event ID), but how can I check the license consumed by them? In other words, the total size of the data set returned by a query. With that, I could decide to blacklist certain events knowing beforehand that the blacklist will save X MB of license a day. Cheers, Jose
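License usage is metered on raw bytes ingested, so summing len(_raw) over the matching events gives a workable approximation of what a blacklist would save (a sketch; the index and criteria are placeholders):

    index=your_index your_criteria earliest=-24h
    | eval raw_bytes = len(_raw)
    | stats count, sum(raw_bytes) as total_bytes
    | eval MB = round(total_bytes / 1024 / 1024, 2)

Run over a representative 24-hour window, MB is roughly the daily license saving that blacklisting those events would yield.
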
I have a table containing a field SSS with search patterns, and another field FFF to which I want to apply those patterns, in order to get the records with matches. Something like:

SSS         FFF
*Tomcat*    /opt/app/tomcat/
*jquery*    libxml2-2.9.1-6.el7.5.x86_64
*           Package Installed Version Required Version python-perf
*jquery*    jQuery Version Prior to 3.5.0

I can't figure out a case-insensitive solution that returns the first, third and fourth records.
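A sketch that lowercases both sides and converts the glob-style pattern into a regex. It assumes * is the only wildcard character in SSS; other regex metacharacters appearing in the patterns (such as the dots in version strings) would need escaping first:

    index=your_index
    | eval pattern = "^" . replace(lower(SSS), "\*", ".*") . "$"
    | where match(lower(FFF), pattern)

With the sample data, *tomcat* becomes ^.*tomcat.*$ and matches /opt/app/tomcat/, the bare * becomes ^.*$ and matches anything, and the jQuery row matches on the lowercased text, so rows one, three and four come back.
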
Hi, I've installed the Splunk App for Infrastructure into Splunk Enterprise 8.1. I've deployed Splunk Connect for Kubernetes, which is successfully indexing logs and metrics (not objects). When I try to open SAI I get: "Unable to retrieve migration status. Please refresh the page. If the issue persists, please contact Support." The logs (index=_internal sourcetype="splunk_app_infrastructure") below suggest an SSL issue. Any suggestions? Thanks

2021-09-21 11:58:23,235 - pid:9882 tid:MainThread ERROR em_group_metadata_manager:94 - Failed to execute group metadata manager modular input -- Error: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_group_metadata_manager.py", line 92, in do_execute
    self.update_group_membership()
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/logging_utils/instrument.py", line 69, in wrapper
    retval = f(decorated_self, *args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_group_metadata_manager.py", line 64, in update_group_membership
    all_groups = EMGroup.load(0, 0, '', 'asc')
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_model_group.py", line 253, in load
    query=kvstore_query
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/em_base_persistent_object.py", line 91, in load
    data_list = cls.storage_load(**params)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/rest_handler/session.py", line 73, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/storage_mixins/kvstore_mixin.py", line 61, in storage_load
    data = cls._paged_load(limit, skip, sort, fields, query_str)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/storage_mixins/kvstore_mixin.py", line 74, in _paged_load
    store = cls.store()
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/common_libs/storage_mixins/kvstore_mixin.py", line 36, in store
    store = svc.kvstore[cls.storage_name()]
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/client.py", line 1240, in __getitem__
    response = self.get(key)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/client.py", line 1668, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/client.py", line 766, in get
    **query)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 686, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 1194, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 1252, in request
    response = self.handler(url, message, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_infrastructure/bin/external_lib/splunklib/binding.py", line 1392, in request
    connection.request(method, path, body, head)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1277, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1323, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1272, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1032, in _send_output
    self.send(msg)
  File "/opt/splunk/lib/python3.7/http/client.py", line 972, in send
    self.connect()
  File "/opt/splunk/lib/python3.7/http/client.py", line 1447, in connect
    server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)
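UNKNOWN_PROTOCOL during the handshake usually means a TLS client reached a port that answered in plaintext. Since the app is reaching the KV store through splunkd's management port, a hedged first check is whether SSL is still enabled there:

    # show the effective SSL settings and which file they come from
    $SPLUNK_HOME/bin/splunk btool server list sslConfig --debug

    # server.conf -- the app's https connection assumes this is true
    [sslConfig]
    enableSplunkdSSL = true

If enableSplunkdSSL was turned off (some hardening guides do this), the app's HTTPS calls to the management port would fail in exactly this way.
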
Is it possible to have WinHostMon events formatted as XML?
Hi knowledgeable people, I have been provided a query which works successfully when the DBA runs it on the server itself. However, when run using DB Connect, no results are found. Please can someone advise why this may be? Thanks. This is the query:

SELECT ar.replica_server_name,
       adc.database_name,
       ag.name AS ag_name,
       drs.is_local,
       drs.is_primary_replica,
       drs.synchronization_state_desc,
       drs.is_commit_participant,
       drs.synchronization_health_desc,
       drs.recovery_lsn,
       drs.truncation_lsn,
       drs.last_sent_lsn,
       drs.last_sent_time,
       drs.last_received_lsn,
       drs.last_received_time,
       drs.last_hardened_lsn,
       drs.last_hardened_time,
       drs.last_redone_lsn,
       drs.last_redone_time,
       drs.log_send_queue_size,
       drs.log_send_rate,
       drs.redo_queue_size,
       drs.redo_rate,
       drs.filestream_send_rate,
       drs.end_of_log_lsn,
       drs.last_commit_lsn,
       drs.last_commit_time
FROM master.sys.dm_hadr_database_replica_states AS drs
INNER JOIN master.sys.availability_databases_cluster AS adc
        ON drs.group_id = adc.group_id
       AND drs.group_database_id = adc.group_database_id
INNER JOIN master.sys.availability_groups AS ag
        ON ag.group_id = drs.group_id
INNER JOIN master.sys.availability_replicas AS ar
        ON drs.group_id = ar.group_id
       AND drs.replica_id = ar.replica_id
ORDER BY ag.name, ar.replica_server_name, adc.database_name;
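Two hedged things to check. First, the dm_hadr_* DMVs require the VIEW SERVER STATE permission; the DBA's own login probably has it, but the identity DB Connect uses may not, and without it the query can come back empty or fail quietly. Second, DB Connect sometimes wraps the supplied SQL in an outer statement for validation and paging, which a trailing semicolon (and occasionally an ORDER BY) can break. A trimmed variant to try from the DB Connect SQL editor:

SELECT ar.replica_server_name,
       adc.database_name,
       ag.name AS ag_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM master.sys.dm_hadr_database_replica_states AS drs
INNER JOIN master.sys.availability_databases_cluster AS adc
        ON drs.group_id = adc.group_id
       AND drs.group_database_id = adc.group_database_id
INNER JOIN master.sys.availability_groups AS ag
        ON ag.group_id = drs.group_id
INNER JOIN master.sys.availability_replicas AS ar
        ON drs.group_id = ar.group_id
       AND drs.replica_id = ar.replica_id

If the trimmed version returns rows, add the remaining columns and the ORDER BY back one step at a time.
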
When configuring threat feeds in ES, the Intelligence Downloads settings include a 'Maximum age' field for threat intel downloads. Does anyone know what the 'Maximum age' field does?