All Topics


I am trying to push data to my Splunk Cloud trial instance, but it returns "HTTP/1.1 503 Service Unavailable". Am I missing something, or is my cloud trial instance down? The host URL I am using is "https://<my-instance>.splunkcloud.com:<port>/services/collector". The request format is given below:

curl -k https://<my-instance>.splunkcloud.com:<port>/services/collector -H "Authorization: Splunk <cloud-instance-token>" -H "Content-Type: application/json" -d '{ "event": {payload} }'
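One quick check, as a sketch only (it assumes HEC is exposed on this hostname and on port 8088, which may not be true for a Cloud trial stack), is the HEC health endpoint, which tells you whether the collector itself is up before worrying about the event payload:

curl -k "https://<my-instance>.splunkcloud.com:8088/services/collector/health"
# A healthy collector typically answers with something like: {"text":"HEC is healthy","code":17}
# If this also returns 503, the problem is the endpoint/port or the instance state rather than the event JSON.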
Hi, as the title suggests, I am trying to send 2 streams of logs. From the document Forward data to third-party systems - Splunk Documentation I know there are 2 limitations:
- I can only send raw data
- I cannot filter only the data I want
So sending all data is OK for me. Currently, my UF has an app called INDEXER_OUTPUT, which has these settings in its default/outputs.conf:

[tcpout]
defaultGroup=my_indexer_cluster
autoLBFrequency=300

[tcpout:my_indexer_cluster]
server=<indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997

[tcpout-server://<indexer_01_ip>:9997]
[tcpout-server://<indexer_02_ip>:9997]
[tcpout-server://<indexer_03_ip>:9997]
[tcpout-server://<indexer_04_ip>:9997]

So what I did was create another server class with a single app in it called ELK_OUTPUT. It also has a single default/outputs.conf file with this config:

[tcpout]

[tcpout:elk_server]
server=<elk_server_ip>:3514
sendCookedData=false

Upon adding the client to the server class, I noticed a weird behavior: only metrics.log gets sent to the ELK server. What I am suspecting is that maybe, because my [WinEventLog://Security] input stanza contains "renderXml = true" and "evt_resolve_ad_obj = 1", the data is no longer considered "raw data"?
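In case it helps, a minimal sketch of the combined outputs.conf I think should clone every event to both destinations (group names are the ones above; whether the two apps' [tcpout] stanzas merge this way on my UF is an assumption I still need to verify with btool):

[tcpout]
# listing both groups in defaultGroup makes the UF clone data to each group (not load-balance across them)
defaultGroup = my_indexer_cluster, elk_server

[tcpout:my_indexer_cluster]
server = <indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997

[tcpout:elk_server]
server = <elk_server_ip>:3514
sendCookedData = false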
Hello team, we are currently testing the upgrade of Splunk Universal Forwarder (x86) version 10.0.0.0 on a Windows 10 32-bit virtual machine. However, the upgrade consistently fails with error code 1603.
https://download.splunk.com/products/universalforwarder/releases/10.0.0/windows/splunkforwarder-10.0.0-ea5bfadeac3a-windows-x86.msi
Please note the following observations:
- A fresh installation of version 10.0.0.0 completes successfully.
- An upgrade from version 9.4.2.0 to 9.4.3.0 works without any issues.
- The upgrade was attempted both via the UI and using silent switches, but the result was the same, and the actions are rolled back.
- Unfortunately, we were unable to attach the log file for reference.
Could you please assist us in identifying and resolving the root cause of this issue?
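For anyone trying to reproduce, a sketch of the silent upgrade command with verbose MSI logging enabled (the log path and the AGREETOLICENSE property are assumptions from our test VM):

msiexec /i splunkforwarder-10.0.0-ea5bfadeac3a-windows-x86.msi AGREETOLICENSE=Yes /qn /L*v C:\Temp\uf_upgrade.log
REM In the resulting log, searching for "return value 3" usually points to the custom action that triggers the 1603 rollback.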
Hi, any support please. I have 2 lookups.

tmp1_1.csv
WorkplaceId,PK1,Description,Contract
1234567890,7535712,Contract1,19
1123456789,7535712,Contract2,18
1234567890,7456072,Contract3,14
1234567890,7456072,Contract4,15
1234567891,7456072,Contract5,16

tmp1_2.csv
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19

The primary key between these tables is (WorkplaceId, Contract) = (WorkplaceId, ContractId). The task is always to select the content from tmp1_2.csv based on one of two conditions:

1. cond1: select everything from tmp1_2.csv where (WorkplaceId, Contract) != (WorkplaceId, ContractId). In this case the result should be
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13

2. cond2: select everything from tmp1_2.csv where (WorkplaceId, Contract) = (WorkplaceId, ContractId). In this case the result should be
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19

Any support, please?
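For context, one direction I was considering, as a sketch (it assumes both CSVs are uploaded as lookup table files and referenced by file name; the helper field "matched" is something I introduced):

| inputlookup tmp1_2.csv
| lookup tmp1_1.csv WorkplaceId, Contract AS ContractId OUTPUT Description AS matched
| where isnull(matched)
| fields WorkplaceId, State, Timestamp, ContractId

For cond2, the only change would be replacing isnull(matched) with isnotnull(matched).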
Environment:
- Product: Splunk Enterprise (Indexer)
- Deployment: On-premises
- Current Version: 9.3.2
- Target Version: 9.4.x (tested 9.4.0, 9.4.2)
- Current KV Store Version: MongoDB 4.17
- Expected KV Store Version: MongoDB 7.x (per documentation)

Issue Summary: We are experiencing KV Store upgrade failures when upgrading a Splunk Enterprise indexer from 9.3.2 to any 9.4.x version. According to Splunk documentation, the upgrade from 9.3.x to 9.4.x should be seamless, with an automatic KV Store upgrade from MongoDB 4.x to 7.x. Both the automatic and the manual KV Store upgrade approaches have failed.

Sample errors:
- called Result::unwrap() on an Err value: UpgradeError { details: "Error updating status to 'INITIAL_UPGRADE_SEQUENCE' on 127.0.0.1:8191 document: Error { kind: Write(WriteError(WriteError { code: 11000, code_name: None, message: "E11000 duplicate key error collection: migration_metadata.migration_metadata index: id dup key: { _id: \"127.0.0.1:8191\" }", details: None })), labels: {}, wire_version: None, source: None }", kind: LocalError }
- Failed to upgrade KV Store to the latest version. KV Store is running an old version, 4.2. Resolve upgrade errors and try to upgrade KV Store to the latest version again.

There are other errors (wiredTiger, etc.) that may also be relevant. We tried the upgrade manually and via Ansible automation (same steps in both cases).

Questions:
- Why is KV Store upgrading to 4.25 instead of directly to 7.x as documented?
- How do we get past this? We have a large infrastructure that still needs to be upgraded.
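For anyone comparing notes, the checks we run on the node between attempts (a sketch; a default install path is assumed):

/opt/splunk/bin/splunk show kvstore-status
# The MongoDB-side details of the failed migration usually end up here:
tail -100 /opt/splunk/var/log/splunk/mongod.log
grep -i "migration" /opt/splunk/var/log/splunk/splunkd.log | tail -50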
Hello All, we send logs from Windows to Splunk via the Universal Forwarder. We want to create alerts for Event ID 1104 (The security log is full) and Event ID 1105 (Event log automatic backup). However, when searching, we cannot find either of these events. When reviewing the log files (EVTX), Event ID 1104 appears as the final entry in the archived log, while Event ID 1105 is the initial entry in the newly created EVTX file. Here is the configuration for log archiving: (screenshot not shown)
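For reference, the kind of search we expected to return these events, as a sketch (the index is a placeholder for whatever your Security channel lands in; with renderXml enabled the source would be XmlWinEventLog:Security instead):

index=* source="WinEventLog:Security" (EventCode=1104 OR EventCode=1105)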
I have a small Splunk Enterprise deployment in an air-gapped lab. I set up this deployment about 18 months ago, and recently I noticed that I am not rolling any data. I want to set a retention period of 1 year for all the data. After checking the configuration, it looks like I have the number of hot buckets set to auto (which is 3 by default, I assume), but I don't find any warm buckets, so everything is in hot buckets. I am looking at a few settings - maxHotSpanSecs, frozenTimePeriodInSecs and maxVolumeDataSizeMB - that should roll data to warm and then cold buckets eventually.

Under /opt/splunk/etc/system/local/indexes.conf:
- maxHotSpanSecs is set to 7776000
- frozenTimePeriodInSecs 31536000
- maxVolumeDataSizeMB (not set)

Under /opt/splunk/etc/apps/search/indexes.conf:
- maxHotSpanSecs not set
- frozenTimePeriodInSecs 31536000 (for all the indexes)
- maxVolumeDataSizeMB (not set)

Shouldn't frozenTimePeriodInSecs take precedence? Maybe my maxVolumeDataSizeMB is set too high. Do I need to change it? How do frozenTimePeriodInSecs and maxVolumeDataSizeMB affect each other? I thought frozenTimePeriodInSecs would override maxVolumeDataSizeMB.
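For context, a sketch of the per-index settings I am considering for the 1-year target (the index name, paths, and the exact values are assumptions, not what is currently deployed):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# remove data once its newest event is older than 1 year
frozenTimePeriodInSecs = 31536000
# cap a hot bucket's time span to 1 day so buckets actually roll to warm on a regular basis
maxHotSpanSecs = 86400
# default bucket sizing; a hot bucket also rolls to warm when it reaches this size
maxDataSize = auto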
Hello, I see there are lots of Cisco event-based detections and not many for Palo Alto or Check Point (firewall, IDS/IPS, threat) events. Is everyone just creating their own event-based detections for these two vendors? I have all the TA apps and connectors installed for both vendors, but I am just not seeing any event-based detections that have already been set up.
I want to configure Federated Search so that Deployment A can search Deployment B, and Deployment B can also search Deployment A. I understand that Federated Search is typically unidirectional (local search head → remote provider). Is it possible to achieve true bidirectional searching in a single architecture by creating two separate unidirectional configurations (A→B and B→A)? Has anyone implemented this setup successfully? Any best practices or caveats would be appreciated. Also, has anyone implemented this along with ITSI - what are the takeaways and the dos and don'ts?
Team, do you know where I can find information about certifications like ISO 27001 that apply to our agents, such as the OTel Collector (Splunk Distribution), UF, and HF?
So, I have been struggling with this for a few days. I have thrown it against generative AI and am not getting exactly what I want. We have a requirement to ensure a percentage of timely critical-event investigation completion per month for Critical and High notable events in Splunk ES.

I have this query, which gives me the numerator and denominator for the events but does not break them out by Urgency/Severity:

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| eval EventOpenedEpoch = notable_time, TriageStartedEpoch = triage_time, ResolutionEpoch = notable_time + new_to_resolution_duration, DaysInNewStatus = round(new_duration/86400,2), DaysToResolution = round(new_to_resolution_duration/86400,2)
| where new_to_resolution_duration>0
| eval "Event Opened" = strftime(EventOpenedEpoch, "%Y-%m-%d %H:%M:%S"), "Triage process started" = strftime(TriageStartedEpoch, "%Y-%m-%d %H:%M:%S"), "Event Resolved" = strftime(ResolutionEpoch, "%Y-%m-%d %H:%M:%S")
| rename rule_id AS "Event ID"
| table "Event ID", "Event Opened", "Triage process started", "Event Resolved", DaysInNewStatus, DaysToResolution
| sort - DaysToResolution

Sample output (Event ID | Event Opened | Triage process started | Event Resolved | DaysInNewStatus | DaysToResolution):
4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@e90ff7db7d8ff92bbe8aa4566c1bab37 | 2025-07-05 02:02:13 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48
7C412294-C46A-448A-8170-466CE301D56A@@notable@@0feff824336394dbe4dcbedcbf980238 | 2025-07-05 02:02:08 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48

This query does give me the Urgency for events, but does not give me time to resolution:

`notable`
| search (urgency=critical)
| eval startTime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table startTime, rule_id source comment urgency reviewer status_description owner_realname status_label

Sample output (startTime | rule_id | source | comment | urgency | reviewer | status_description | owner_realname | status_label):
2025-07-29 09:30:16 | 4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@5ebbdf0e0821b477785b018e29d44973 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 09:30:12 | AD72F249-8457-4D5E-9557-9621E2F5D3FF@@notable@@3043a1f3a2fbc3f92f67800a066ada66 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 07:15:18 | 7C412294-C46A-448A-8170-466CE301D56A@@notable@@54a0ffabacbf083cb7f2e370937fc2bf | Endpoint - ADFS Smart Lockout Events - Rule | The event has been triaged | critical | abcde00 | Initial analysis of threat | John Doe | Triage

Trying to combine them to get time to resolution plus urgency (so I can filter on urgency) has been a complete mess. If I do manage to combine them by trimming around the Event ID / rule_id, it either doesn't give me the expected numbers or, half the time, is missing the urgency. Is there something I am missing, or is this even possible? Thanks in advance.
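A sketch of the kind of join I have been attempting (the helper field event_key is mine, and the assumption that the GUID before @@notable@@ is a usable join key is exactly what I am unsure about - it identifies the correlation search rather than the individual notable, so it collapses urgency to one value per rule):

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon") AND new_to_resolution_duration > 0
| eval event_key = mvindex(split(rule_id, "@@"), 0)
| join type=left event_key
    [ search `notable`
      | eval event_key = mvindex(split(rule_id, "@@"), 0)
      | stats latest(urgency) AS urgency by event_key ]
| eval DaysToResolution = round(new_to_resolution_duration/86400, 2)
| table rule_id, urgency, DaysToResolution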
Hi, can anybody help me create a dot chart?
x-axis: _time
y-axis: points for the values of the fields tmp, min_w, max_w
Here is the input table: (screenshot)
Here is the desired chart: (screenshot)
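A minimal sketch of the kind of search that could feed such a chart (the span and the use of avg are assumptions; the field names are from above). Rendering it as a line chart with markers enabled, or as a scatter chart, gives one point per field per time bucket:

... | timechart span=15m avg(tmp) AS tmp avg(min_w) AS min_w avg(max_w) AS max_w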
We have multiple roles created in Splunk, each restricted by its indexes, and users are added to these roles via AD groups; we use the LDAP method for authentication. Below is authentication.conf:

[authentication]
authType = LDAP
authSettings = uk_ldap_auth

[uk_ldap_auth]
SSLEnabled = 1
bindDN = CN=Infodir-HBEU-INFSLK,OU=Service Accounts,DC=InfoDir,DC=Prod,DC=FED
groupBaseDN = OU=Splunk Network Log Analysis UK,OU=Applications,OU=Groups,DC=Infodir,DC=Prod,DC=FED
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = aa-lds-prod.uk.fed
port = 3269
userBaseDN = ou=HSBCPeople,dc=InfoDir,dc=Prod,dc=FED
userNameAttribute = employeeid
realNameAttribute = displayname
emailAttribute = mail

[roleMap_uk_ldap_auth]
<roles mapped with AD group created>

I checked this post - https://community.splunk.com/t5/Security/How-can-I-generate-a-list-of-users-and-assigned-roles/m-p/194811 - and tried the same command:

|rest /services/authentication/users splunk_server=local |fields title roles realname |rename title as userName|rename realname as Name

I ran this on the SH, but it returns only 5 results, while we have nearly 100 roles created. Even with splunk_server=*, the result is the same. I have the admin role as well, so I believe I have the needed capabilities. Not sure what I am missing here - any thoughts?
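For comparison, a sketch of a roles-oriented REST call I also tried (the endpoint is core Splunk; the field names shown are the common ones and may need adjusting):

| rest /services/authorization/roles splunk_server=local
| fields title, imported_roles, srchIndexesAllowed
| rename title AS role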
I'm working on observability tooling and have built an MCP bridge that routes queries and admin activities for Splunk, along with several other tools. How do I find out whether there are existing MCPs already built for Splunk, so I can move ahead faster? Happy to collab!
Hi, can anybody help with how to change the font size of drop-down items/selections? Here is my dropdown:

<input type="dropdown" token="auftrag_tkn" searchWhenChanged="true" id="dropdownAuswahlAuftrag">
  <label>Auftrag</label>
  <fieldForLabel>Auftrag</fieldForLabel>
  <fieldForValue>Auftrag</fieldForValue>
  <search>
    <query>xxxxx</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>
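One approach I have seen, as a sketch only (the CSS class names for the dropdown menu differ between Splunk versions, so the selectors below are assumptions to be confirmed in the browser's dev tools), is a hidden HTML panel carrying a style block:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* closed control of this specific input (id taken from the dropdown above) */
        #dropdownAuswahlAuftrag .splunk-dropdown,
        #dropdownAuswahlAuftrag .select2-choice {
          font-size: 16px !important;
        }
        /* open menu items - selector varies by Splunk version (assumption) */
        .select2-results li,
        .select2-drop .select2-result-label {
          font-size: 16px !important;
        }
      </style>
    </html>
  </panel>
</row>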
After the Splunk master enters maintenance mode, one of the indexers goes offline and then comes back online, and maintenance mode is disabled. The fixup tasks have been stuck for about a week. The number of pending fixup tasks went from around 5xx to 102 after deleting an rb bucket. I assume the issue is bucket syncing in the indexer cluster, because the client's servers are a bit laggy (network delay, low CPU). There are 40 fixup tasks in progress and 102 fixup tasks pending on the indexer cluster master.

The internal log shows that all 40 in-progress tasks report the following errors:

Getting size on disk: Unable to get size on disk for bucket id=xxxxxxxxxxxxx path="/splunkdata/windows/db/rb_xxxxxx" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, or merge-buckets command which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk
Delete dir exists, or failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx; will build bucket locally. err= Failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx from srcs=xxxxxxxxxxxxxxxxxxxxxxx
CMSlave [6205 CallbackRunnerThread] - searchState transition bid=xxxxxxxxxxxxxxxxxxxxx from=PendingSearchable to=Unsearchable reason='fsck failed: exitCode=24 (procId=1717942)'

The internal log shows that all 102 pending tasks report the following error:

ERROR TcpInputProc [6291 ReplicationDataReceiverThread] - event=replicationData status=failed err="Could not open file for bid=windows~xxxxxx err="bucket is already registered with this peer" (Success)"

Does anyone know what "fsck failed exit code 24" and "bucket is already registered with this peer" mean? How can these issues be resolved to reduce the number of fixup tasks? Thanks.
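In case it is useful, the per-bucket integrity check I was planning to run against one of the affected rb_ buckets, as a sketch (the exact fsck flags vary between Splunk versions, so the command help should be checked first, and the bucket should not be in active use while this runs):

/opt/splunk/bin/splunk fsck scan --one-bucket --bucket-path="/splunkdata/windows/db/rb_xxxxxx"
# if the scan reports problems, a repair attempt on the same bucket would be:
/opt/splunk/bin/splunk fsck repair --one-bucket --bucket-path="/splunkdata/windows/db/rb_xxxxxx"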
Running Splunk 9.3.5 on RHEL 8 in a STIG-hardened environment. The non-Splunk RHEL instances running a Universal Forwarder have no issue accessing the audit.log files, apparently by virtue of the statement AmbientCapabilities=CAP_DAC_READ_SEARCH located in the /etc/systemd/system/SplunkForwarder.service file. However, the same is not true on the Splunk instances themselves: they require read permissions via a file ACL or something similar, and each of those options results in multiple STIG compliance findings, which in turn require write-ups as vendor (Splunk) dependencies.

Question - why? Why can't Splunk access the audit.log files the same way as the UF? Or is there some way to do the same sort of thing with AmbientCapabilities for Splunkd.service? It is tempting to quit collecting these logs with Splunk itself and install a UF on the Splunk instances too.
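For what it's worth, the kind of unit override I have been considering, as a sketch (it assumes the managed unit is named Splunkd.service, and I have not confirmed that splunkd honors the capability the same way the forwarder does):

# systemctl edit Splunkd.service   (creates /etc/systemd/system/Splunkd.service.d/override.conf)
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH

# then reload and restart:
# systemctl daemon-reload && systemctl restart Splunkd.service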
I am making a dashboard with a dropdown input called $searchCriteria$. I am trying to set the value of search_col based on the value of the $searchCriteria$ token. I have tried the following:

| eval search_col = if($searchcriteria$ == "s_user", user, path)
| eval search_col = if('$searchcriteria$' == "s_user", user, path)
| eval search_col = if($searchcriteria$ == 's_user', user, path)
| eval search_col = if('$searchcriteria$' == 's_user', user, path)

I even tried:

| eval search_col = if(s_user == s_user, user, path)

The value of search_col is always the same as path. I have tested, and the value of the $searchcriteria$ token is getting set properly. What am I doing wrong?
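For comparison, a sketch of the form that usually works for string tokens (the token is substituted as literal text before the search runs, so it needs double quotes to become a string literal; also note that token names are case-sensitive, so $searchCriteria$ and $searchcriteria$ would be different tokens - an assumption worth checking against the input definition):

| eval search_col = if("$searchCriteria$" == "s_user", user, path)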
Hi, I am trying to form a custom link to the episode/event in the email alert triggered from Splunk ITSI. However, when I open the link to that event or episode directly, it always opens the general alerts and episodes view, and you then have to search again for the event and check its details. Is there a way to get a link directly to the episode, so that a person can open it without searching through the list of events?

The link to a specific episode, e.g.
https://splunkcloud.com/en-US/app/itsi/itsi_event_management?tab=layout_1&emid=1sdfdff-3cd3-11f0-b7a7-44561c0a81024&earliest=%40d&latest=now&tabid=all-events
does not open that specific episode when opened in a separate window. (The URL above has been modified so as not to share the exact URL of the episode.)
I have a requirement to monitor log files created by Trellix on my Windows 11 and Server 2019 hosts. The log files are located in C:\ProgramData\McAfee\Endpoint Security\Logs\ and are named:
- AccessProtection_Activity.log
- ExploitPrevention_Activity.log
- OnDemandScan_Activity.log
- SelfProtection_Activity.log

My stanzas in inputs.conf are configured as:

[monitor://C:\ProgramData\McAfee\Endpoint Security\Logs\AccessProtection_Activity.log
disabled = 0
index = winlogs
sourcetype = WinEventLog:HIPS
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXel = false

The format is the same for each log. For some reason Splunk is not ingesting the log data.
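For reference, a sketch of how I would expect the stanza to look (the closing bracket on the header, the wildcard, and dropping the remaining settings are my assumptions; to my understanding start_from, current_only, checkpointInterval, and renderXml apply to WinEventLog inputs rather than file monitors):

# wildcard picks up all four *_Activity.log files; the closing ] on the stanza header is required
[monitor://C:\ProgramData\McAfee\Endpoint Security\Logs\*_Activity.log]
disabled = 0
index = winlogs
sourcetype = WinEventLog:HIPS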