All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Query to find when a host is stopped. As shown in the picture, the _time field stops updating when the host is stopped and returns to normal when the host is started. So I need to trigger an alert when the host is stopped.
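One common pattern for this kind of "host went silent" alert is to compare each host's latest event time against now (a sketch; the index name and the 30-minute threshold are placeholder assumptions):

```spl
| tstats latest(_time) as last_seen where index=my_index by host
| eval minutes_silent = round((now() - last_seen) / 60, 0)
| where minutes_silent > 30
```

Saved as a scheduled alert, this returns one row per host that has stopped sending data, so "number of results > 0" can serve as the trigger condition.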
Hi all, I configured SmartStore on two new Splunk core infrastructures. I didn't encounter errors while setting up the indexers and multisite, but when I configured SmartStore I started receiving these errors multiple times:

ERROR CacheManager [1721417 cachemanagerUploadExecutorWorker-3] - action=upload, cache_id="*THE*BUCKET*", status=failed, reason="HTTP Error 14: Retry policy exhausted in Read(): PerformWork() - CURL error [6]=Couldn't resolve host name [UNAVAILABLE]", elapsed_ms=881841

ERROR CacheManager [1721414 cachemanagerUploadExecutorWorker-0] - action=upload, cache_id="*THE*BUCKET*", status=failed, reason="HTTP Error 9: Permanent error in ComposeObject: {\n "error": {\n "code": 412,\n "message": "At least one of the pre-conditions you specified did not hold.",\n "errors": [\n {\n "message": "At least one of the pre-conditions you specified did not hold.",\n "domain": "global",\n "reason": "conditionNotMet",\n "locationType": "header",\n "location": "If-Match"\n }\n ]\n }\n}\n [FAILED_PRECONDITION]", elapsed_ms=327982

I checked the contents of the GCS folder with:

splunk cmd splunkd rfs ls index:my_index | grep *THE*BUCKET*IN*ERROR*

The bucket is present in the folder. I tried restarting the cluster manager and doing a rolling restart of the indexers, but the error persists.

I share the .conf files:

server.conf:

[cachemanager]
max_cache_size = 250000
hotlist_recency_secs = 604800
max_concurrent_downloads = 4
hotlist_bloom_filter_recency_hours = 168

indexes.conf:

[volume:remote_store]
storageType = remote
path = gs://bucket
remote.gs.credential_file = cred
Hi, I have updated a React component in one of my Splunk apps. However, when I now relaunch Splunk, I see no change, just an empty app. I have cleared my browser cache and my yarn cache, but there is still no content in my browser when launching the app. Can you please help? Thanks
Hello, everyone. I want to install the Universal Forwarder on RHEL 5 32-bit. Is it available for installation? If so, please provide a link. It should also be compatible with the latest version of Splunk. Any ideas? Thanks in advance.
Hi All, I am trying to find unencrypted passwords in my logs using an anchor pattern. After getting the password value via the anchor pattern, I have to check whether it is encrypted or not. In my logs the encryption is done using the asterisk (*) symbol, so it has been difficult for me to differentiate between a user-entered password and an encrypted password, since a user password can also contain an asterisk (*). There are no prior requirements for the passwords (such as at least 1 uppercase, 1 lowercase, etc.), and passwords have no minimum length. Passwords can be like:

1. 1234556687
2. RonnieAlex
3. Tyler@123
4. #%@cosmic123
5. A***B*V*****U**Y*** (encrypted password in my log)

Help me with a regex that matches all the above cases. Thanks in advance.
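One heuristic (a sketch; it assumes the masked form always contains at least one asterisk and consists only of letters, digits, and asterisks, as in sample 5, and that the extracted field is named password) is to flag values matching that shape:

```spl
| eval is_encrypted = if(match(password, "^(?=.*\*)[A-Za-z0-9*]+$"), "yes", "no")
```

Note this is only heuristic: a user-entered password made up solely of letters, digits, and at least one asterisk (e.g. ab*cd) would also match, so it cannot be a perfect separator given the stated constraints.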
Hi All, I am using transaction to group my DDoS appliance events based on a field called status, which has values like starting, holding, and end.

| transaction eventID startswith=starting endswith=end maxspan=12h

Raw events (notice there is a status value in every event):

Jun 20 13:58:05 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:57:38+08:00;eventID=7861430818955774485;status=starting;dstip=10.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=3450,bps=39006800;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:59:05 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:58:07+08:00;eventID=7861430818955774485;status=holding;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 14:00:07 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:59:07+08:00;eventID=7861430818955774485;status=end;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;

As you know, transaction creates a duration field covering the entire transaction, start to end. Is there a way to also calculate the duration between status=holding and status=end? Basically, another transaction command in the same query, but with startswith=holding and endswith=end? The requirement is to find out how long the attack was in holding status. I am assuming another sub-transaction may help meet this.
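Rather than nesting a second transaction, the per-status timestamps can be pulled out with stats (a sketch; the index and sourcetype are placeholder assumptions, while eventID and status follow the sample events):

```spl
index=main sourcetype=ddos_appliance eventID=* status=*
| eval holding_time = if(status=="holding", _time, null())
| eval end_time = if(status=="end", _time, null())
| stats min(holding_time) as holding_time, max(end_time) as end_time by eventID
| eval holding_duration = end_time - holding_time
```

Here holding_duration is in seconds per eventID; this can sit alongside the existing transaction search or replace it if only the durations are needed.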
Hi Everyone, I have a field called TriggeredMessage coming in an event in Splunk, and I want to extract a short description field from it. Below is a sample TriggeredMessage (it will vary):

Alert::Serious::Server::Memory Utilization is 92 %, which is above threshold 90% & less than 95%::memory

And I want to extract a short description as below:

Server::Memory Utilization is 92 %, which is above threshold 90% & less than 95%

I am trying a regex but it is not working for me. Please help me with this. Thanks.
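Assuming the segments are always delimited by "::" and the goal is to drop the first two segments and the last one, a rex along these lines extracts the middle portion:

```spl
| rex field=TriggeredMessage "^(?:[^:]+::){2}(?<short_description>.+)::[^:]+$"
```

Against the sample, the leading "Alert::Serious::" and the trailing "::memory" are stripped, leaving everything in between (including the inner "::") in short_description.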
Hi, I am using a multiselect input with the following query:

|inputlookup ABC
| eval hjk=_key
| lookup XYZ asset OUTPUT ass AS name, app AS application
| stats values(application)

However, when I add this onto the actual dashboard, no results are generated for the input as expected. What am I doing incorrectly? Thanks,
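Multiselect inputs generally expect one value per row rather than a single multivalue cell, so the final stats values(...) may be the culprit. A variant that emits one row per application (a sketch; the lookup and field names are taken from the query above):

```spl
| inputlookup ABC
| eval hjk=_key
| lookup XYZ asset OUTPUT ass AS name, app AS application
| stats count by application
| fields application
```

The input's "Field for Value" (and label) would then point at the application field.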
Hi, I am new to Splunk; I just started using it last month. The following is not working with my alert event:

| eval error=substr(msg, 0, 1000) | table error app_name

It doesn't work for large strings of 20k or more characters: the table cells show blank in this case. The values can be found in verbose mode but not in fast mode. However, it works when msg is around 1,150 characters.
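Two things may be at play here. First, substr in SPL is 1-based, so substr(msg, 1, 1000) is the conventional form. Second, if the msg field itself is being truncated at extraction time rather than by substr, the limit usually involved is maxchars under [kv] in limits.conf (a sketch; the 50000 value and the assumption that auto key-value extraction is the root cause are both guesses to verify):

```ini
# limits.conf - raise the auto key-value extraction character limit
# (default is typically 10240 characters)
[kv]
maxchars = 50000
```

This would go in a local limits.conf on the search head, followed by a restart.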
Hi, I have a dashboard as follows (see screenshot). And the stakeholder wants a legend as follows (see screenshot) added to the dashboard. How can this be achieved? Thanks,
Hi,   I need to create a multiselect input using a static list of data. How can I do this? Thanks, Patrick
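In Simple XML, a static multiselect is declared with choice elements (a sketch; the token name, labels, and values are placeholders):

```xml
<input type="multiselect" token="env_token">
  <label>Environment</label>
  <choice value="prod">Production</choice>
  <choice value="qa">QA</choice>
  <choice value="dev">Development</choice>
  <default>prod</default>
  <delimiter> OR </delimiter>
</input>
```

The delimiter element controls how multiple selections are joined when $env_token$ is substituted into a search.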
Hi, I am working on logs that can be either one line or multiple lines. If a log is one line, I want to take the whole log, whereas if it has multiple lines, I want to consider only the first two lines. Please suggest a possible solution using regex. Thanks in advance.
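A rex that captures the first line and, when present, the second line of _raw (a sketch; it assumes lines are separated by \n and the capture name first_two_lines is a placeholder):

```spl
| rex field=_raw "(?<first_two_lines>[^\n]+(?:\n[^\n]+)?)"
```

For a single-line event the optional group simply does not match, so the whole line is captured; for multiline events everything from the third line onward is dropped.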
Hi Everyone, I have the below sets of raw logs:

source-timestamp=1655611288414,event-type:Lead_Staging__c
source-timestamp=1655611288414,event-type:Account_Snapshot__c
source-timestamp=1655611288414,event-type:Opp

I want to fetch all the event-type values. Can someone guide me?
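A rex followed by stats pulls out every event-type value (a sketch; the capture name event_type is a choice, and the value is assumed to run up to a comma or end of line, as in the samples):

```spl
| rex "event-type:(?<event_type>[^,\s]+)"
| stats count by event_type
```

This yields one row per distinct event-type (Lead_Staging__c, Account_Snapshot__c, Opp) with its event count.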
Hi All, we are facing a weird issue where we are unable to see any new incidents under PCI Compliance > Incident Review. When we click a new incident, it just returns "no notable events found". No changes were made to ES or Splunk. Has anybody faced this kind of issue?
splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- b'{"messages":[{"type":"ERROR","text":"Cannot call handler \'splunk_ta_aws_settings_proxy\' due to missing script \'aws_proxy_settings_rh.py\'."}]}'
2022-06-18 19:03:38,719 level=INFO pid=14431 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."
2022-06-18 19:03:51,931 level=INFO pid=14435 tid=MainThread logger=splunksdc.collector pos=collector.py:run:267 | | message="Modular input started."
2022-06-18 19:03:51,940 level=ERROR pid=14435 tid=MainThread logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=utils.py:wrapper:77 | datainput="SqsBasedS3Input" start_time=1655579031 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/utils.py", line 75, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 857, in run
    proxy = ProxySettings.load(config)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunk_ta_aws/common/proxy.py", line 40, in load
    virtual=True,
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 38, in load
    content = self._cached_load(path)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 52, in _cached_load
    content = self._fresh_load(path)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 64, in _fresh_load
    elements = self._get(path, query)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 68, in _get
    response = self._service.get(path, **query)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 288, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 69, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 684, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 1197, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 1260, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- b'{"messages":[{"type":"ERROR","text":"Cannot call handler \'splunk_ta_aws_settings_proxy\' due to missing script \'aws_proxy_settings_rh.py\'."}]}'
2022-06-18 19:03:51,942 level=INFO pid=14435 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."

Earlier it was working fine; it stopped working after I tried changing the passwords.conf file. What should I do now? Reverting passwords.conf is not working.
How do I set up my indexers with the Splunk AWS add-on from config files? Our indexers don't have a web UI. The question is: how do I update the Secret Key and Access Key from the backend within the indexers, under /opt/splunk/etc/slave-apps/Splunk_TA_aws/? Is there a file I need to update?
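On instances without Splunk Web, add-on credentials generally end up encrypted in passwords.conf rather than in a plain-text file, and the generic way to create such an entry is the storage/passwords REST endpoint. A hypothetical sketch (whether Splunk_TA_aws reads this exact entry name/realm is an assumption to verify against the add-on's documentation; host, user, and key values are placeholders):

```
# Hypothetical: store a credential, encrypted, in the Splunk_TA_aws app context
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/storage/passwords \
  -d name=my_access_key \
  -d password=my_secret_key
```

Splunk encrypts the password on write, so the resulting passwords.conf should not be edited by hand afterward.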
Hi, we are using Splunk ES 7.0 in our SOC environment. After upgrading to ES 7.0 we are getting the following issue (see screenshot) when creating notable events. Any ideas?
Hi Everyone, there's a small problem I'm having while using the ltrim function.

Query:
| makeresults
| eval username="dev_vishal"
| eval trimName=ltrim(username,"dev_")
| table username trimName

Output:
username = dev_vishal
trimName = ishal

What I really want is to trim the "dev_" out of "dev_vishal". I noticed that this works well with any other username that does not start with a "v". For example:

Query:
| makeresults
| eval username="dev_sajal"
| eval trimName=ltrim(username,"dev_")
| table username trimName

Output:
username = dev_sajal
trimName = sajal

I request the Splunk community to please help me with this. Thanks, Sajal
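The behavior above follows from how ltrim works: its second argument is a set of characters to strip from the left, not a literal prefix, so "dev_vishal" also loses the leading "v" of "vishal". To remove the exact prefix, replace with an anchored regex is the usual approach (same fields as the queries above):

```spl
| makeresults
| eval username="dev_vishal"
| eval trimName=replace(username, "^dev_", "")
| table username trimName
```

Here trimName comes out as "vishal", and a username like "dev_sajal" is unaffected beyond losing the prefix.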
I have two searches; the following are their results individually:

index="myindex" <my search 1> | table App Size Count

App  Size  Count
App1 5GB   100
App2 100GB 10000

index="myindex" <my search 2> | table App Size Count

App  Size Count
App3 15GB 1500

Now I want to run a report that shows the results of all apps (1 to 3) together, so I used append:

index="myindex" <my search 1> | table App Size Count
| append [search index="myindex" <my search 2> | table App Size Count]
| addcoltotals

But when I used it, I didn't get the complete result; the Size column shows a different value. Something like this:

App  Size  Count
App1 5GB   100
App2 100GB 10000
App3 7GB   720

What should I use to get the complete result set as I got for each search? What is the best way to merge two query result sets? Thanks!
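One likely complication with addcoltotals here is that Size values like "5GB" are strings, and column totals only work on numeric values. A sketch that normalizes Size to a number first (it assumes every Size value ends in "GB"; the field name Size_GB is a choice):

```spl
index="myindex" <my search 1> | table App Size Count
| append [search index="myindex" <my search 2> | table App Size Count]
| eval Size_GB = tonumber(replace(Size, "GB$", ""))
| fields App Size_GB Count
| addcoltotals labelfield=App label=Total
```

With numeric Size_GB and Count columns, addcoltotals can sum both and label the totals row "Total" in the App column.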
Hello, I am using Dashboard Studio. Within my dashboard, I have added a multiselect input with the token name "ms_clients". I have created a base search and a chain search to populate this input, and in my source code I have added the data source under the dataSources value of this search. I am not sure what I have done wrong; my multiselect values will not populate. Any suggestions?

Filter base search:

"ds_IxHRgRCU": {
    "type": "ds.chain",
    "options": {
        "extend": "ds_N4XDvyXB",
        "query": "| search unit_id = $dd_UnitId$\n| stats count by dest\n| fields - count"
    },
    "name": "filter_clients_search"
}

Multiselect input:

"input_xvZpOcK4": {
    "options": {
        "items": [
            { "label": "All", "value": "*" }
        ],
        "defaultValue": "*",
        "token": "ms_Clients",
        "dataSources": {
            "primary": "ds_IxHRgRCU"
        }
    },
    "title": "Clients",
    "type": "input.multiselect"
}

Thanks in advance!