All Topics

Addon Builder 4.5.0. This app adds a data input automatically, which is a good thing; I then go to "Add new" to complete the configuration. Everything runs fine. A few days later, I thought of a better name for the input, so I cloned the original, gave it a different name, kept all the same configuration, and disabled the original. I noticed that I can still run the script and see the API output, but when I searched for the output I did not find it; instead I started to see 401 errors. I went back to the data inputs, disabled the clone, re-enabled the original, and everything is back to normal. Is there a rule about cloning Add-on Builder data inputs that says not to clone them?
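Not an answer, but a hedged troubleshooting sketch: modular input script errors (including authentication failures) are written by splunkd to the _internal index, so a search along these lines can show what the cloned input was doing when the 401s appeared. The input name is a placeholder.

index=_internal sourcetype=splunkd log_level=ERROR "<your_input_name>"
| table _time component _raw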
Addon Builder 4.5.0, modular input using my own Python code. In this example the collection interval is set to 30 seconds. I added a log to verify it is running, here: log_file = "/opt/splunk/etc/apps/TA-api1/logs/vosfin_cli.log". The main page (Configure Data Collection) shows all the input names that I built, but looking at the event count, I see 0. When I go into the log, it shows the script running and returning data OK. Why doesn't the event count increase each time the script runs? Is there additional configuration in inputs, props, or web.conf that I need to add/edit to make it count up?
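A hedged sketch for checking whether the input's events are actually reaching an index (as opposed to only being written to the script's own log file): the per-sourcetype throughput metrics in _internal count indexed events. The sourcetype name is a placeholder for whatever the add-on assigns.

index=_internal source=*metrics.log group=per_sourcetype_thruput series="<your_sourcetype>"
| timechart span=5m sum(ev) as indexed_events

If that stays at zero while the script log shows data, the script is running but its output is not being handed back to Splunk for indexing.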
Hi team, I am on Splunk 9.4 and have configured DB Connect. The SQL query searches the table for any failures and passes the result to a Splunk search. I configured a real-time alert to send the log details to my email address; however, the emails are landing in the junk folder and I am not able to figure out why. Any help is appreciated.
The steps I am following are:
1. Log into the Monitoring Console: log in to the Splunk Cloud UI and search for the Cloud Monitoring Console under Apps.
2. Check indexing health: go to Indexing -> Indexing Performance, review ingestion rate trends, and identify queue buildup (parsing, indexing, or pipeline queues) - a sample queue-fill search is sketched below.
3. Monitor data inputs: go to Forwarders > Forwarders deployment, check forwarder connectivity and status, and confirm data forwarding from Universal Forwarders or Heavy Forwarders.
What other steps can be included in this?
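A hedged example of the kind of queue-fill search the "queue buildup" step could be backed by; the group and field names are the ones metrics.log uses, and the queue list is a common subset rather than an exhaustive one.

index=_internal source=*metrics.log group=queue name IN (parsingqueue, aggqueue, typingqueue, indexqueue)
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m avg(fill_pct) by name

Sustained high fill percentages on a queue usually point at the stage downstream of it as the bottleneck.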
Hi all, I'm working on a Splunk SOAR connector where we plan to add support for webhooks (introduced in SOAR v6.4.1), allowing the connector to receive data from external sources. I see there's an option to enable authentication for the webhook, but after enabling it, I'm unsure what type of information needs to be included in the request. I've tried using basic authentication and an auth token, but neither worked. Could someone please guide me on what information should be included in the request once authentication is enabled?
I am trying to display raw logs in a dashboard, but the dashboard is dropping the raw logs. Is there a way to display them? In a standard search the raw logs show up, but not in the dashboard. Sample query:
index=* | eval device = coalesce(dvc, device_name) | eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false") | where is_valid_str="false" | stats count by device, index, _raw
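A hedged sketch of one workaround, assuming the same base search: aggregate the raw events into a multivalue field with list() so the dashboard table has an explicit field to render instead of splitting by _raw.

index=* | eval device = coalesce(dvc, device_name)
| where NOT match(device, "^[a-zA-Z0-9_\-.,$]*$")
| stats count list(_raw) as raw_events by device, index

Alternatively, an events panel (rather than a statistics table) driven by the same search before the stats step will show _raw natively.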
I am trying to push data to a Splunk Cloud trial instance, but it returns "HTTP/1.1 503 Service Unavailable". Am I missing something, or is my Cloud trial instance down? The host URL I am using is "https://<my-instance>.splunkcloud.com:<port>/services/collector" and the request format is given below:
curl -k https://<my-instance>.splunkcloud.com:<port>/services/collector -H "Authorization: Splunk <cloud-instance-token>" -H "Content-Type: application/json" -d '{ "event": {payload} }'
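A hedged first check, reusing the same placeholders: HEC exposes a health endpoint, and a healthy reply there narrows the 503 down to the event request itself rather than the endpoint or token URL.

curl -k "https://<my-instance>.splunkcloud.com:<port>/services/collector/health"

Also worth noting, as a general observation rather than a trial-specific fact: on Splunk Cloud the HEC hostname is often an http-inputs- prefixed name rather than the web UI hostname, so the URL that serves the search UI is not necessarily the one that accepts collector traffic.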
Hi, as the title suggests, I am trying to send 2 streams of logs. From the document Forward data to third-party systems - Splunk Documentation I know there are 2 limitations:
- I can only send raw data
- I cannot filter only the data I want
So sending all data is OK for me. Currently, my UF has an app called INDEXER_OUTPUT, whose default/outputs.conf has these settings:
[tcpout]
defaultGroup=my_indexer_cluster
autoLBFrequency=300
[tcpout:my_indexer_cluster]
server=<indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997
[tcpout-server://<indexer_01_ip>:9997]
[tcpout-server://<indexer_02_ip>:9997]
[tcpout-server://<indexer_03_ip>:9997]
[tcpout-server://<indexer_04_ip>:9997]
So I created another server class with a single app in it called ELK_OUTPUT. It also has a single default/outputs.conf file, with this config:
[tcpout]
[tcpout:elk_server]
server=<elk_server_ip>:3514
sendCookedData=false
After adding the client to the server class, I noticed a weird behavior: only metrics.log is sent to the ELK server. What I suspect is that my [WinEventLog://Security] input stanza contains "renderXML = true" and "evt_resolve_ad_obj = 1", so it is no longer considered "raw data"?
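Not a confirmed fix, but a sketch worth testing: routing can also be declared per input with _TCP_ROUTING in inputs.conf, so the stanza in question explicitly targets both output groups defined above. The group names are the ones from this post; whether Windows event log data survives the raw/uncooked path to a third-party receiver is a separate question this sketch does not settle.

# inputs.conf on the UF - hypothetical sketch (existing stanza settings stay as they are)
[WinEventLog://Security]
_TCP_ROUTING = my_indexer_cluster, elk_server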
Hello team, we are currently testing the upgrade of Splunk Universal Forwarder (x86) to version 10.0.0.0 on a Windows 10 32-bit virtual machine. However, the upgrade consistently fails with error code 1603.
https://download.splunk.com/products/universalforwarder/releases/10.0.0/windows/splunkforwarder-10.0.0-ea5bfadeac3a-windows-x86.msi
Please note the following observations:
- A fresh installation of version 10.0.0.0 completes successfully.
- An upgrade from version 9.4.2.0 to 9.4.3.0 works without any issues.
- The upgrade was attempted both via the UI and using silent switches, but the result was the same, and all actions are rolled back.
- Unfortunately, we were unable to attach the log file for reference.
Could you please assist us in identifying and resolving the root cause of this issue?
Hi, any support please. I have 2 lookups.
tmp1_1.csv
WorkplaceId,PK1,Description,Contract
1234567890,7535712,Contract1,19
1123456789,7535712,Contract2,18
1234567890,7456072,Contract3,14
1234567890,7456072,Contract4,15
1234567891,7456072,Contract5,16
tmp1_2.csv
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19
The primary key between these tables is WorkplaceId,Contract = WorkplaceId,ContractId. The task is always to select the content from tmp1_2.csv based on conditions:
1. cond1: select everything from tmp1_2.csv where WorkplaceId,Contract != WorkplaceId,ContractId. In this case the result should be
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13
2. cond2: select everything from tmp1_2.csv where WorkplaceId,Contract = WorkplaceId,ContractId. In this case the result should be
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19
Any support, please?
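A hedged SPL sketch for both conditions, assuming tmp1_1.csv and tmp1_2.csv are uploaded as lookup table files and using the compound key exactly as stated above: enrich each tmp1_2 row from tmp1_1 on WorkplaceId plus Contract/ContractId, then filter on whether a match was found.

| inputlookup tmp1_2.csv
| lookup tmp1_1.csv WorkplaceId, Contract AS ContractId OUTPUT Description
| where isnull(Description)
| fields WorkplaceId State Timestamp ContractId

That is cond1 (no matching pair in tmp1_1.csv); for cond2, swap isnull(Description) for isnotnull(Description).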
Environment:
Product: Splunk Enterprise (Indexer)
Deployment: On-premises
Current Version: 9.3.2
Target Version: 9.4.x (tested 9.4.0, 9.4.2)
Current KV Store Version: MongoDB 4.17
Expected KV Store Version: MongoDB 7.x (per documentation)
Issue Summary: We are experiencing KV Store upgrade failures when upgrading a Splunk Enterprise indexer from 9.3.2 to any 9.4.x version. According to Splunk documentation, the upgrade from 9.3.x to 9.4.x should be seamless, with an automatic KV Store upgrade from MongoDB 4.x to 7.x. Both automatic and manual KV Store upgrade approaches have failed.
Sample errors:
- called Result::unwrap() on an Err value: UpgradeError { details: "Error updating status to 'INITIAL_UPGRADE_SEQUENCE' on 127.0.0.1:8191 document: Error { kind: Write(WriteError(WriteError { code: 11000, code_name: None, message: "E11000 duplicate key error collection: migration_metadata.migration_metadata index: id dup key: { _id: \"127.0.0.1:8191\" }", details: None })), labels: {}, wire_version: None, source: None }", kind: LocalError }
- Failed to upgrade KV Store to the latest version. KV Store is running an old version, 4.2. Resolve upgrade errors and try to upgrade KV Store to the latest version again.
Other messages (wiredTiger, etc.) also appear and may be relevant. We tried both manually and via Ansible automation (same steps).
Questions:
Why is KV Store upgrading to 4.25 instead of directly to 7.x as documented?
How can we get past this? We have a large infrastructure that we need to upgrade.
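Not an answer, just a hedged way of pulling the relevant internal logging together in one place while investigating; the component filter is a wildcard because the exact component names vary across versions.

index=_internal sourcetype=splunkd component=KVStore* (ERROR OR WARN)
| stats count latest(_raw) as last_message by component

The mongod process also logs to _internal under sourcetype=mongod, which is where wiredTiger-level messages usually surface.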
Hello All, We send logs from Windows to Splunk via Universal Forwarder. We want to create alerts for Event ID 1104 - The security log is full and 1105 - Log automatic backup. However, when searching, we cannot find either of these events. When reviewing the log files (EVTX), Event ID 1104 appears as the final entry in the archived log, while Event ID 1105 is the initial entry in the newly created EVTX file. Here is the configuration for log archiving:
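A hedged sanity check to see whether either ID is reaching Splunk at all, regardless of which index it lands in or how it is rendered; the index and sourcetype filters here are intentionally broad placeholders.

index=* (sourcetype=WinEventLog OR sourcetype=XmlWinEventLog) EventCode IN (1104, 1105)
| stats count by host, index, sourcetype, EventCode

If nothing comes back, the next place to look is usually the [WinEventLog://Security] inputs.conf stanza on the forwarder, since whitelist/blacklist filters there can exclude specific EventCodes.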
I have a small Splunk Enterprise deployment in an air-gapped lab that I set up about 18 months ago. Recently I noticed that no data is rolling. I want to set a retention period of 1 year for all the data. After checking the configuration, it looks like the number of hot buckets is set to auto (which is 3 by default, I assume), but I don't find any warm buckets - everything is in hot buckets. I am looking at a few settings, maxHotSpanSecs, frozenTimePeriodInSecs, and maxVolumeDataSizeMB, that should roll data to warm and then cold buckets eventually.
Under /opt/splunk/etc/system/local/indexes.conf:
maxHotSpanSecs is set to 7776000
frozenTimePeriodInSecs 31536000
maxVolumeDataSizeMB (not set)
Under /opt/splunk/etc/apps/search/indexes.conf:
maxHotSpanSecs not set
frozenTimePeriodInSecs 31536000 (for all the indexes)
maxVolumeDataSizeMB (not set)
Shouldn't frozenTimePeriodInSecs take precedence? Maybe my maxVolumeDataSizeMB is set too high. Do I need to change it? How do frozenTimePeriodInSecs and maxVolumeDataSizeMB affect each other? I thought frozenTimePeriodInSecs would override maxVolumeDataSizeMB.
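Not an answer to the precedence question, but a hedged indexes.conf sketch of what a ~1-year retention usually looks like per index; the index name and the one-day hot-bucket span are illustrative assumptions, not values taken from this deployment.

# indexes.conf - hypothetical sketch
[my_index]
frozenTimePeriodInSecs = 31536000
# ~365 days; frozen buckets are deleted unless coldToFrozenDir/coldToFrozenScript is set
maxHotSpanSecs = 86400
# cap each hot bucket at ~1 day of event time so hot buckets roll to warm regularly

One detail worth keeping in mind: only buckets that have already rolled out of hot are candidates for freezing, so data sitting in hot buckets that never roll will not age out no matter what frozenTimePeriodInSecs says.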
Hello, I see there are lots of Cisco event-based detections and not many for Palo Alto or Check Point (firewall, IDS/IPS, threat) events. Is everyone just creating their own event-based detections for these two vendors? I do have all the TA apps installed and connectors for both vendors; I'm just not seeing any event-based detections that have already been set up.
I want to configure Federated Search so that Deployment A can search Deployment B, and Deployment B can also search Deployment A. I understand that Federated Search is typically unidirectional (local search head → remote provider). Is it possible to configure true bidirectional searching in a single architecture, i.e. create two separate unidirectional configurations (A→B and B→A)? Has anyone implemented this setup successfully? Any best practices or caveats would be appreciated. Also, has anyone implemented this along with ITSI? What are the takeaways and dos & don'ts?
Team, do you know where I can find information about certifications like ISO 27001 that apply to our agents, such as the OTel Collector (Splunk Distribution), UF, and HF?
So, I have been struggling with this for a few days; I have thrown it against generative AI and am not getting exactly what I want. We have a requirement to report, each month, the percentage of Critical and High notable events in Splunk ES whose investigation was completed in a timely manner. I have this query, which gives me the numerator and denominator for the events but does not break them out by urgency/severity:
| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| eval EventOpenedEpoch = notable_time, TriageStartedEpoch = triage_time, ResolutionEpoch = notable_time + new_to_resolution_duration, DaysInNewStatus = round(new_duration/86400,2), DaysToResolution = round(new_to_resolution_duration/86400,2)
| where new_to_resolution_duration>0
| eval "Event Opened" = strftime(EventOpenedEpoch, "%Y-%m-%d %H:%M:%S"), "Triage process started" = strftime(TriageStartedEpoch, "%Y-%m-%d %H:%M:%S"), "Event Resolved" = strftime(ResolutionEpoch, "%Y-%m-%d %H:%M:%S")
| rename rule_id AS "Event ID"
| table "Event ID", "Event Opened", "Triage process started", "Event Resolved", DaysInNewStatus, DaysToResolution
| sort - DaysToResolution
Sample output (Event ID | Event Opened | Triage process started | Event Resolved | DaysInNewStatus | DaysToResolution):
4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@e90ff7db7d8ff92bbe8aa4566c1bab37 | 2025-07-05 02:02:13 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48
7C412294-C46A-448A-8170-466CE301D56A@@notable@@0feff824336394dbe4dcbedcbf980238 | 2025-07-05 02:02:08 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48
This query does give me the urgency for events, but does not give me time to resolution:
`notable` | search (urgency=critical)
| eval startTime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table startTime, rule_id source comment urgency reviewer status_description owner_realname status_label
Sample output (startTime | rule_id | source | comment | urgency | reviewer | status_description | owner_realname | status_label):
2025-07-29 09:30:16 | 4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@5ebbdf0e0821b477785b018e29d44973 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 09:30:12 | AD72F249-8457-4D5E-9557-9621E2F5D3FF@@notable@@3043a1f3a2fbc3f92f67800a066ada66 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 07:15:18 | 7C412294-C46A-448A-8170-466CE301D56A@@notable@@54a0ffabacbf083cb7f2e370937fc2bf | Endpoint - ADFS Smart Lockout Events - Rule | The event has been triaged | critical | abcde00 | Initial analysis of threat | John Doe | Triage
Trying to combine them to get time to resolution plus urgency (so I can filter on urgency) has been a complete mess. If I do manage to combine them by trimming around the Event ID / rule_id, it doesn't give me the expected numbers, or half the time the urgency is missing. Is there something I am missing, or is this even possible? Thanks in advance.
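A heavily hedged sketch of one way to attach urgency to the audit rows; the join key is an assumption, since (as noted above) the rule_id values on the two sides may need trimming before they actually match, and the 5-day threshold is just a placeholder SLA.

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| where new_to_resolution_duration > 0
| eval DaysToResolution = round(new_to_resolution_duration/86400, 2)
| join type=left rule_id
    [ search `notable` | stats latest(urgency) as urgency by rule_id ]
| where urgency IN ("critical", "high")
| stats count as total, count(eval(DaysToResolution <= 5)) as timely by urgency
| eval pct_timely = round(timely / total * 100, 1)

If the keys genuinely differ only by a suffix, normalizing both sides before the join (for example with an eval/rex that keeps a consistent portion of the ID) would be the variation to try; which part of the ID is the stable one is not something this sketch can assume.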
Hi, can anybody help me create a dot chart? x-axis: _time; y-axis: points for the values of the fields tmp, min_w, max_w. Here is the input table: (screenshot not included) Here is the wished-for chart: (screenshot not included)
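A hedged sketch, assuming tmp, min_w, and max_w already exist as fields on each event: chart the three series over _time, then render the panel as a line chart with markers turned on (charting.chart.showMarkers in Simple XML) to emphasize individual points rather than the connecting lines.

<your base search>
| timechart span=1h avg(tmp) as tmp avg(min_w) as min_w avg(max_w) as max_w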
We have multiple roles created in Splunk, restricted by index; users are added to these roles via AD groups, and we use LDAP for authentication. Below is authentication.conf:
[authentication]
authType = LDAP
authSettings = uk_ldap_auth
[uk_ldap_auth]
SSLEnabled = 1
bindDN = CN=Infodir-HBEU-INFSLK,OU=Service Accounts,DC=InfoDir,DC=Prod,DC=FED
groupBaseDN = OU=Splunk Network Log Analysis UK,OU=Applications,OU=Groups,DC=Infodir,DC=Prod,DC=FED
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = aa-lds-prod.uk.fed
port = 3269
userBaseDN = ou=HSBCPeople,dc=InfoDir,dc=Prod,dc=FED
userNameAttribute = employeeid
realNameAttribute = displayname
emailAttribute = mail
[roleMap_uk_ldap_auth]
<roles mapped with AD group created>
I checked this post - https://community.splunk.com/t5/Security/How-can-I-generate-a-list-of-users-and-assigned-roles/m-p/194811 - and tried the same command:
|rest /services/authentication/users splunk_server=local |fields title roles realname |rename title as userName|rename realname as Name
Running this on the search head returns only 5 results, but we have nearly 100 roles created. Even with splunk_server=*, the result is the same. I have the admin role as well, so I hope I have the needed capabilities. What am I missing here? Any thoughts?
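Two hedged observations rather than an answer. First, if the real goal is the list of roles and their index restrictions, the roles endpoint queries those directly, independent of which users have logged in; the fields below are the standard ones that endpoint returns.

| rest /services/authorization/roles splunk_server=local
| fields title imported_roles srchIndexesAllowed srchIndexesDefault
| rename title as role

Second, if I recall correctly, with LDAP authentication the /services/authentication/users endpoint only lists accounts Splunk already knows about (typically users who have logged in at least once), which would explain a count of 5 rather than one entry per AD group member.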
I'm working on observability tooling and have built an MCP bridge that routes queries and admin activities for Splunk, along with several other tools. How do I find out whether there are existing MCPs already built for Splunk, so I can move ahead faster? Happy to collab!