All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi,   I need to create a multiselect input using a static list of data. How can I do this? Thanks, Patrick
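One way to approach this, if you would rather feed the choices from a populating search than type them into the input definition, is a small generating search along these lines (a sketch; the values and the field name option are placeholders, not from the question):

| makeresults
| eval option="Option A,Option B,Option C"
| makemv delim="," option
| mvexpand option
| table option

Point the input's label and value fields at option; otherwise, static choices can simply be entered directly in the multiselect input's settings.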
Hi, I am working with logs that can be either a single line or multiple lines. If a log is a single line I want to take the whole log, whereas if it has multiple lines I want to keep only the first two lines. Please suggest a possible solution using regex. Thanks in advance.
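A possible search-time approach (a sketch; the index, sourcetype, and the extracted field name first_two are placeholders): the regex keeps the first line plus, if present, the second line, so single-line events come through whole:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "^(?<first_two>[^\r\n]+(?:\r?\n[^\r\n]*)?)"
| table _time first_two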
Hi everyone, I have the following set of raw logs:

source-timestamp=1655611288414,event-type:Lead_Staging__c
source-timestamp=1655611288414,event-type:Account_Snapshot__c
source-timestamp=1655611288414,event-type:Opp

I want to fetch all the event-type values. Can someone guide me?
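A sketch of one way to pull the values out at search time (the index name and the extracted field name event_type are placeholders):

index=your_index "event-type:"
| rex max_match=0 "event-type:(?<event_type>[^,\s]+)"
| stats count by event_type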
Hi All, we are facing a weird issue where we are unable to see any new incidents under PCI Compliance > Incident Review. When we click a new incident it just returns "no notable events found". No changes were made to ES or Splunk. Has anybody faced this kind of issue?
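One quick check while troubleshooting, assuming the default ES notable index is in use, is whether notables are still being written at all:

index=notable earliest=-24h
| stats count by search_name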
splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- b'{"messages":[{"type":"ERROR","text":"Cannot call handler \'splunk_ta_aws_settings_proxy\' due to missing script \'aws_proxy_settings_rh.py\'."}]}'
2022-06-18 19:03:38,719 level=INFO pid=14431 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."
2022-06-18 19:03:51,931 level=INFO pid=14435 tid=MainThread logger=splunksdc.collector pos=collector.py:run:267 | | message="Modular input started."
2022-06-18 19:03:51,940 level=ERROR pid=14435 tid=MainThread logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=utils.py:wrapper:77 | datainput="SqsBasedS3Input" start_time=1655579031 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/utils.py", line 75, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 857, in run
    proxy = ProxySettings.load(config)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunk_ta_aws/common/proxy.py", line 40, in load
    virtual=True,
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 38, in load
    content = self._cached_load(path)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 52, in _cached_load
    content = self._fresh_load(path)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 64, in _fresh_load
    elements = self._get(path, query)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/bin/splunksdc/config.py", line 68, in _get
    response = self._service.get(path, **query)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 288, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 69, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 684, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 1197, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/slave-apps/Splunk_TA_aws/lib/splunklib/binding.py", line 1260, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- b'{"messages":[{"type":"ERROR","text":"Cannot call handler \'splunk_ta_aws_settings_proxy\' due to missing script \'aws_proxy_settings_rh.py\'."}]}'
2022-06-18 19:03:51,942 level=INFO pid=14435 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."

It was working fine earlier; it stopped working after I tried changing the password.conf file. What should I do now? Reverting the password.conf is not working.
How do I set up my indexers with the Splunk AWS add-on from config files? Our indexers don't have the web UI. The question is: how do I update the Secret Key and Access Key from the backend within the indexers under /opt/splunk/etc/slave-apps/Splunk_TA_aws/? Is there a file I need to update?
Hi, we are using Splunk ES 7.0 in our SOC environment. After upgrading to ES 7.0 we are getting the following issue when creating Notable Events. Any idea?
Hi Everyone, there's a small problem I'm having while using the ltrim function.

Query:
| makeresults
| eval username="dev_vishal"
| eval trimName=ltrim(username,"dev_")
| table username trimName

Output:
username = dev_vishal
trimName = ishal

What I really want is to trim the "dev_" out of "dev_vishal". I noticed that this works well with any other username which does not start with a "v". For example:

Query:
| makeresults
| eval username="dev_sajal"
| eval trimName=ltrim(username,"dev_")
| table username trimName

Output:
username = dev_sajal
trimName = sajal

Request the Splunk community to please help me with this. Thanks, Sajal
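For context on why this happens: ltrim(X, Y) treats Y as a set of characters rather than a literal prefix, so every leading character that is one of d, e, v or _ gets stripped, which is why the leading "v" of "vishal" disappears. A sketch using replace() with an anchored regex instead:

| makeresults
| eval username="dev_vishal"
| eval trimName=replace(username, "^dev_", "")
| table username trimName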
I have two searches; here are their results individually.

index="myindex" <my search 1> | table App Size Count

App   Size   Count
App1  5GB    100
App2  100GB  10000

index="myindex" <my search 2> | table App Size Count

App   Size  Count
App3  15GB  1500

Now I want to run a report that shows the results for all apps (1 to 3) together, so I used append:

index="myindex" <my search 1>
| table App Size Count
| append
    [ search index="myindex" <my search 2>
      | table App Size Count ]
| addcoltotals

But when I run it, I don't get the complete result; the Size column shows different values, something like this:

App   Size   Count
App1  5GB    100
App2  100GB  10000
App3  7GB    720

What should I use to get the complete result set that each search returns on its own? What is the best way to merge two query result sets? Thanks!
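A sketch of one way to merge and total the two result sets; it assumes the Size values can be reduced to plain numbers (addcoltotals cannot add strings such as "5GB"), and <my search 1>/<my search 2> stand for the original filters:

index="myindex" <my search 1>
| table App Size Count
| append
    [ search index="myindex" <my search 2>
      | table App Size Count ]
| eval SizeGB=tonumber(replace(Size, "GB", ""))
| table App SizeGB Count
| addcoltotals labelfield=App label="Total"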
Hello - I am using Dashboard Studio. Within my dashboard, I have added a multiselect input with the token name "ms_clients". I have created a base search and a chain search to populate this input, and in my source code I have referenced that search in the input's dataSources value. I am not sure what I have done wrong; my multiselect values will not populate. Any suggestions?

Filter base search:

"ds_IxHRgRCU": {
    "type": "ds.chain",
    "options": {
        "extend": "ds_N4XDvyXB",
        "query": "| search unit_id = $dd_UnitId$\n| stats count by dest\n| fields - count"
    },
    "name": "filter_clients_search"
}

Multiselect input:

"input_xvZpOcK4": {
    "options": {
        "items": [
            { "label": "All", "value": "*" }
        ],
        "defaultValue": "*",
        "token": "ms_Clients",
        "dataSources": {
            "primary": "ds_IxHRgRCU"
        }
    },
    "title": "Clients",
    "type": "input.multiselect"
}

Thanks in advance!
Please see this search - I'm trying to add missing field values from another index to this search.

index=1 earliest=-9d latest=now ExternalApiType=Event_DetectionSummaryEvent
| fillnull
| stats values(ComputerName) AS ComputerName values(DetectName) AS DetectName values(UserName) AS User values(event_platform) AS Platform values(FileVersion) AS SensorVersion P values(MachineDn) AS OU values(SiteName) AS SiteName count(_time) AS count BY _time EventUUID
| sort 0 - _time
| eval Time=strftime(_time, "%m/%d/%Y %H:%M:%S")
| appendcols
    [ search earliest=-9d latest=now index=json "AuditKeyValues{}.Key"=new_state "AuditKeyValues{}.ValueString"=*
      | spath
      | spath AuditKeyValue{} ]

index=1 has the fields ComputerName, DetectName, UserName, _time, EventUUID; index=main has the fields event_platform, FileVersion, MachineDn, SiteName.

I want to pull the fields from index=main into the stats command of the index=1 search. I thought it would be as simple as adding index=main at the beginning of the search with an OR: (index=json ExternalApiType=Event_DetectionSummaryEvent) OR (index=main FileVersion=*). But it's not working. I have to have the ExternalApiType value and it's only in the first index. I also tried join with a subsearch, but it didn't work. The original search is for 90 days, so I shouldn't use a subsearch anyway. Thank you.
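A hedged sketch of a single-search alternative; it assumes the index=main events also carry ComputerName (or some other shared field to correlate on - if not, swap in whichever field both indexes have in common):

(index=1 ExternalApiType=Event_DetectionSummaryEvent) OR (index=main)
| eventstats values(event_platform) AS Platform values(FileVersion) AS SensorVersion values(MachineDn) AS OU values(SiteName) AS SiteName by ComputerName
| search ExternalApiType=Event_DetectionSummaryEvent
| stats values(ComputerName) AS ComputerName values(DetectName) AS DetectName values(UserName) AS User values(Platform) AS Platform values(SensorVersion) AS SensorVersion values(OU) AS OU values(SiteName) AS SiteName count AS count BY _time EventUUID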
My requirement involves a lookup file that contains a list of hosts. It is the saved result of an alert, so the list is the list of servers that are down. Using that lookup file, I want an alert that runs every minute and notifies when a host in the lookup is back to normal. The problem I'm having is that once a host is back to normal, the same host should not be considered further; only the remaining hosts should be checked.

Lookup file that stores the list of down servers: hostdown.csv

Query to find the list of down servers:

| search index=linux sourcetype=df
| where ((PercentUsedSpace >= 80) AND (PercentUsedSpace<=90))
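One possible pattern (a sketch; it assumes hostdown.csv has a host column and that "back to normal" means PercentUsedSpace below 80). The alert search returns only the recovered hosts:

index=linux sourcetype=df
    [| inputlookup hostdown.csv | fields host ]
| stats latest(PercentUsedSpace) AS PercentUsedSpace by host
| where PercentUsedSpace < 80

and a second scheduled search (or an extra step after the alert) rewrites the lookup without those hosts, so they are not considered again:

| inputlookup hostdown.csv
| search NOT
    [ search index=linux sourcetype=df
        [| inputlookup hostdown.csv | fields host ]
      | stats latest(PercentUsedSpace) AS PercentUsedSpace by host
      | where PercentUsedSpace < 80
      | fields host ]
| outputlookup hostdown.csv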
We are running Splunk Enterprise 8.2.4 and it had been working fine with SSO authentication until I updated the SSL certificate. The certificate that was updated is the one referenced in my web.conf, and my web browser shows the new certificate; however, it broke SSO. Please note the updated certificate is also used in authentication.conf by SAML (ClientCert). The error messages below are seen in the Splunk _internal logs:

ERROR UiSAML [66314 webui] - IDP failed to authenticate request. Status Message="" Status Code="Responder"
ERROR Saml [66314 webui] - Failed to parse issuer. Could not evaluate xpath expression /samlp:Response/samlp:Status/samlp:StatusMessage or no matching nodes found. No value found in SamlResponse for key=/samlp:Response/samlp:Status/samlp:StatusMessageCould not evaluate xpath expression /samlp:Response/samlp:Status/samlp:StatusDetail/Cause or no matching nodes found. No value found in SamlResponse for key=/samlp:Response/samlp:Status/samlp:StatusDetail/CauseCould not evaluate xpath expression //saml:Assertion/saml:Issuer or no matching nodes found. No value found in SamlResponse for key=//saml:Assertion/saml:Issuer

How can I fix the problem, please?
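If it helps to collect these failures in one place while troubleshooting (the component names are taken from the excerpt above), something along these lines may be useful:

index=_internal sourcetype=splunkd log_level=ERROR (component=Saml OR component=UiSAML)
| sort 0 - _time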
I would like to know how to write a single field value to an outputlookup. Currently there are fields like id, condition, and value, but the need is to write only condition. Can anyone provide the query for this?
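A minimal sketch, assuming the events already carry a condition field and that the target lookup name conditions_only.csv is made up: only the fields still present in the results are written, so dropping everything except condition before outputlookup yields a single-column lookup.

index=your_index
| fields condition
| dedup condition
| outputlookup conditions_only.csv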
Hello, I upgraded Splunk Enterprise to 9.0.0 today - it went OK. I then upgraded the Splunk Universal Forwarders on Windows Server 2019 to 9.0.0 - the upgrade says all went OK. I opened cmd and executed splunk restart. The SplunkForwarder restarts OK, but I get the following error:

Invalid key in stanza [webhook] in D:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf, line 229: enable_allowlist (value: false)

In the file alert_actions.conf on line 229:

[webhook]
enable_allowlist = false

Anyone know why I'm seeing this after the upgrade? Thanks
I want to trigger a Splunk SOAR playbook to iterate through a list of hosts every hour, check whether they are online in our EDR tool, and, if they are online, display a message to the user via the EDR API. Although the playbook is already complete, I can't think of a good way to have it execute every hour. I thought about using a Splunk app ingestion to query our Splunk instance every 60 minutes to create a dummy label and container that the playbook could be set to "active" on, but that seems like an awkward workaround. Is there some other app or setting I'm missing that could achieve this goal?
Hi all, I'm working on a deployment with a Universal Forwarder, a Heavy Forwarder, an Indexer Cluster and a Search Head Cluster. The problem is this: I have been indexing data from different CSV files for a long time. Yesterday, for the first time, I realized that not all the rows of my CSV files are being indexed. For example: in a CSV I count 24k rows, but when I perform a stats count on the index I see only 16-17k rows. Each file rotates every minute. There is nothing in the logs that points to an error.

On the UNIVERSAL FORWARDER I have this in inputs.conf:

[batch:///var/opt/OV/shared/perfSpi/datafiles/metric/final/F5_ResurcesGroup*]
disabled = 0
index = f5_metrics
sourcetype = f5_metrics
initCrcLength = 100000
move_policy = sinkhole

On the HEAVY FORWARDER, props.conf:

[f5_metrics]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 0
SEDCMD-dropheader = s/^"Node.+//g
SEDCMD-select_fields = s/([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*)/\1,\2,\4,\5,\9,\17,\18/g
#SEDCMD-select_fields = s/([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*)/\1,\4,\5,\9,\17,\18/g
TRANSFORMS-f5_fields_name_extract=f5_fields_name_extract

and in transforms.conf:

[f5_fields_name_extract]
REGEX=([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*),([^,]*)
FORMAT=NodeID::$1 TimeStamp::$2 period_length::$3 ltmVirtualServStatClientCurConns::$4 ltmVirtualServStatVsUsageRatio1m::$5 DisplayAttribute::$6 PollingInterval::$7
#FORMAT=NodeID::$1 period_length::$2 ltmVirtualServStatClientCurConns::$3 ltmVirtualServStatVsUsageRatio1m::$4 DisplayAttribute::$5 PollingInterval::$6
WRITE_META = true

Any suggestions? Thanks, Fabrizio
Hi everyone, I need help understanding where I'm going wrong and how to fix the problem. I have a lookup table in which the last four years of data are stored. The data has a monthly seasonality and I want to predict the next year. I use the predict command with the LLP algorithm to estimate the values. Below I show my query and the output: as you can see, the prediction doesn't work, because it simply repeats the last two values cyclically. Here is the associated table: could you help me understand where I'm going wrong with the query or the data? I did the same work months ago and got a different, more realistic output. Thanks a lot!
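For reference, a minimal sketch of the kind of search involved (the lookup name, time field, and value field are placeholders); with monthly seasonality, LLP generally needs the period stated explicitly and several complete periods of history to produce a sensible forecast:

| inputlookup monthly_values.csv
| eval _time=strptime(month, "%Y-%m")
| timechart span=1mon sum(value) AS value
| predict value algorithm=LLP period=12 future_timespan=12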
I have a SHC consisting of 4 SHs (Splunk on-prem on AWS). Every so often one of the SHs goes into a down state. The only info I can find is a "Failed system status check" on the EC2 instance, with splunkd in a stopped state. The splunkd.log and health.log seem to be fine too. Any suggestions on what could solve the issue?
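One place to look (a sketch; the host value is a placeholder) is whatever the affected search head logged right before it stopped, assuming its _internal data is forwarded to the indexers:

index=_internal sourcetype=splunkd host=<affected_sh> (log_level=ERROR OR log_level=WARN)
| sort 0 - _time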