
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

We're using the Tenable Add-on for Splunk (TA-tenable) to ingest data from Tenable.io. The app's props.conf has the following:

[tenable:io:vuln]
DATETIME_CONFIG = NONE

When we run the following SPL:

index=tenable sourcetype="tenable:io:vuln" | eval lag = _indextime - _time

we are seeing non-zero lag values, even though I would expect the lag to be zero if _time truly equals _indextime. If anything, I would expect DATETIME_CONFIG = CURRENT. What am I missing?
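A minimal diagnostic sketch, assuming one possible explanation: the modular input supplies the event time itself (so _time reflects a Tenable-side timestamp rather than index time, and DATETIME_CONFIG = NONE merely suppresses timestamp extraction from the raw text). Summarizing the lag distribution per source makes it easier to see whether the offsets look like scan/collection delays rather than a parsing problem:

index=tenable sourcetype="tenable:io:vuln"
| eval lag = _indextime - _time
| stats count min(lag) max(lag) avg(lag) perc95(lag) by source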
The Upgrade Readiness App is flagging version 2.0.0 of the Modal Text Message App for Splunk as not jQuery 3.5 compatible. Is this a non-issue, or does the app need to be slightly altered? It's installed in a Splunk Enterprise 9.4.1 environment.
Help me with a Splunk query to monitor the CPU and memory utilized by Splunk ad hoc and alert searches.
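A minimal sketch, assuming the _introspection index is being populated by the default resource-usage introspection input; data.search_props.type distinguishes ad hoc searches from scheduled (alert/report) searches, and the field names below are the stock splunk_resource_usage fields:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats avg(data.pct_cpu) as avg_pct_cpu max(data.mem_used) as max_mem_mb by data.search_props.type data.search_props.user data.search_props.app
| sort - max_mem_mb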
Are there any licensing concerns to be considered for the integration between Splunk and Databricks using the plugin here: https://splunkbase.splunk.com/app/5416? No redistribution or sale - just plain connecting one environment to the other.
Hi Splunkers, my client wants to run a consistency check on all the indexes they collect, so I added enableDataIntegrityControl=1 to every index stanza and created a script that runs SPLUNK_CMD check-integrity -index "$INDEX" for each index. That's where the problem starts: for indexes that are still collecting data in real time (e.g. linux_os logs, windows_os logs), running check-integrity fails. The results look like this:

server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
disableSSLShutdown=0
Setting search process to have long life span: enable_search_process_long_lifespan=1
certificateStatusValidationMethod is not set, defaulting to none.
Splunk is starting with EC-SSC disabled
CMIndexId: New indexName=linux_os inserted, mapping to id=1
Operating on: idx=linux_os bucket='/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0'
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0, Reason=Journal has no hashes.
Operating on: idx=_audit bucket='/opt/splunk/var/lib/splunk/linux_os/db/hot_v1_1'
Total buckets checked=2, succeeded=1, failed=1
Loaded latency_tracker_log_interval with value=30 from stanza=health_reporter
Loaded aggregate_ingestion_latency_health with value=1 from stanza=health_reporter
aggregate_ingestion_latency_health with value=1 from stanza=health_reporter will enable the aggregation of ingestion latency health reporter.
Loaded ingestion_latency_send_interval_max with value=86400 from stanza=health_reporter
Loaded ingestion_latency_send_interval with value=30 from stanza=health_reporter

Is there a way to solve these problems?
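A minimal sketch for narrowing this down, under the assumption that "Journal has no hashes" comes from buckets whose data was written before enableDataIntegrityControl was turned on (the setting only hashes data written after it is enabled) or from hot buckets that have not rolled yet; dbinspect lets you compare each bucket's age and state against when the setting was changed:

| dbinspect index=linux_os
| table bucketId path state startEpoch endEpoch modTime
| sort startEpoch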
opt/caspida/bin/Caspida setuphadoop ...............................Failed to run sudo -u hdfs hdfs namenode -format >> /var/vcap/sys/log/caspida/caspida.out 2>&1
Fri Jun 2 17:06:11 +07 2023: [ERROR] Failed to run hadoop_setup [2]. Fix errors and re-run again.
Error in running /opt/caspida/bin/Caspida setuphadoop on 192.168.126.16. Fix errors and re-run again.

I executed the command /opt/caspida/bin/Caspida setup, but it stopped here: the Hadoop setup step won't run, and I don't know the cause yet. Someone please help me. I have put some install logs here.
Hi, I have a problem with a field in a playbook. I'm building a SOAR playbook to check network traffic to Active Directory Web Services, and I'm stuck on one field.

My objective: use a Run Query action in SOAR to pull additional_action; if additional_action contains "teardown", route the playbook down a specific branch.

| tstats summariesonly=true fillnull_value="unknown" values(All_Traffic.src) as src values(All_Traffic.dest) as dest values(All_Traffic.additional_action) as additional_action values(All_Traffic.status_action) as status_action values(All_Traffic.app) as app count from datamodel="Network_Traffic"."All_Traffic" WHERE (All_Traffic.src_ip IN ({0})) AND (All_Traffic.dest_ip IN ({1})) AND (All_Traffic.dest_port="{2}") by All_Traffic.session_id
| nomv additional_action

If I run the query directly there is a teardown result, and I have added the field additional_action, but the result from the playbook is:

Parameter: {"comment":"Protocol value None , mohon untuk dilakukan analisa kembali. [please re-analyze]

Is there any way to solve this problem?
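A minimal sketch of the query side, assuming the multivalue field returned by values() is what is tripping up the SOAR field mapping; doing the "contains teardown" test in SPL and handing the playbook a simple true/false field means the decision block only has to check one flat value (the {0}/{1}/{2} placeholders are the SOAR format parameters from the original query):

| tstats summariesonly=true fillnull_value="unknown"
    values(All_Traffic.additional_action) as additional_action count
  from datamodel="Network_Traffic"."All_Traffic"
  where (All_Traffic.src_ip IN ({0})) AND (All_Traffic.dest_ip IN ({1})) AND (All_Traffic.dest_port="{2}")
  by All_Traffic.session_id
| nomv additional_action
| eval has_teardown=if(like(additional_action, "%teardown%"), "true", "false")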
I am working on the task: "Send alert notifications to Splunk platform using Splunk Observability Cloud." I have completed the following steps:
- Created an HEC token in Splunk.
- Unchecked the "Enable indexer acknowledgment" option.
- Enabled HEC globally in Splunk Web.
- Enabled SSL (HTTPS).
- Restarted the Splunk instance after configuration.
However, the integration is still not connecting. I'm receiving the following error:
Hello, I am looking for details from anyone who has successfully set up an Enterprise search head cluster behind an AWS ALB using SAML with a PingFederate IdP. It seems this should be doable; however, there does not seem to be a lot of (or really any) detail on this setup.
I'm experiencing an issue with the Cisco SD-WAN application in Splunk where the dashboards are not displaying the expected data. We have followed the official documentation step by step and are successfully receiving both syslog and NetFlow data. However, it seems that the data model "Cisco_SDWAN" associated with the syslog data is not functioning correctly, which is likely causing the dashboards to fail. We've already performed extensive troubleshooting without success. Has anyone encountered a similar issue, or can anyone offer guidance on resolving the data model problem? We're running Splunk Enterprise Security, the Cisco Catalyst SD-WAN App for Splunk, and the Cisco Catalyst SD-WAN Add-on for Splunk.
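A minimal sketch for checking whether the data model itself is being populated, assuming the model name is Cisco_SDWAN as described (if the model has multiple root datasets, you may need to append the dataset name, e.g. datamodel=Cisco_SDWAN.<root_dataset>); these are two separate searches, run one after the other:

| tstats summariesonly=false count from datamodel=Cisco_SDWAN by index, sourcetype

| tstats summariesonly=true count from datamodel=Cisco_SDWAN by index, sourcetype

If the first search returns nothing while the raw syslog events are searchable, the problem is the data model's constraints or the add-on's tags/eventtypes rather than ingestion; if only the second returns nothing, acceleration has not caught up.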
Hi, I have this very simple Splunk search query, and I was able to run it in the Splunk search portal/UI. I am using the same query via the search API (the same query in the form of an encoded URL) - what is the issue? I am getting a total of 164 events in the Splunk portal, but when I run the same query translated into an encoded URL through a Python script, I am getting only 157 records/rows. Since this search is only for yesterday, I am using earliest=-1d@d latest=-0d@d.

index=App001_logs sourcetype="App001_logs_st" earliest=-1d@d latest=-0d@d organization IN ("InternalApps","ExternalApps") AppclientId="ABC123" status_code=200 environment="UAT"
| table _time, AppclientId, organization, environment, proxyBasePath, api_name

The exact same query is translated into an encoded URL (https:// followed by the whole search query), and when I run the Python script on my desktop (my time zone is CST), I get only 157 records/rows. I think there is something going on between UTC and CST - this is what I see in the Splunk portal: 164 events (5/30/25 12:00:00.000 AM to 5/31/25 12:00:00.000 AM). Any guidance please?
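A minimal sketch of one way to rule out the timezone, assuming the difference comes from the REST call resolving -1d@d against a different timezone than your Splunk Web user profile; pinning the window with explicit timestamps (interpreted in the timezone of the user the API call authenticates as) or epoch seconds removes the snap-to-midnight ambiguity - the index and field names are copied from the original query:

index=App001_logs sourcetype="App001_logs_st"
    earliest="05/30/2025:00:00:00" latest="05/31/2025:00:00:00"
    organization IN ("InternalApps","ExternalApps") AppclientId="ABC123" status_code=200 environment="UAT"
| table _time, AppclientId, organization, environment, proxyBasePath, api_name

Alternatively, pass earliest_time and latest_time as search job parameters in the API request instead of baking the time range into the query string.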
I am trying to repeat a line chart for a multiple-host selection. Each line chart should display the CPU usage for each selected host separately. Here is my full source code in Dashboard Studio:

{
  "visualizations": {
    "viz_gsqlcpsd": {
      "type": "splunk.line",
      "dataSources": {
        "primary": "ds_xcdWhjuu"
      },
      "title": "${selected_server:-All Servers} - CPU Usage %"
    }
  },
  "inputs": {
    "input_VtWuBSik": {
      "options": {
        "items": [
          { "label": "All", "value": "*" },
          { "label": "host123", "value": "host123" },
          { "label": "host1234", "value": "host1234" }
        ],
        "defaultValue": [ "*" ],
        "token": "selected_server"
      },
      "title": "server",
      "type": "input.multiselect"
    },
    "input_mj9iUMvw": {
      "options": {
        "defaultValue": "-15m,now",
        "token": "tr_hMOOrvcD"
      },
      "title": "Time Range Input Title",
      "type": "input.timerange"
    }
  },
  "layout": {
    "type": "grid",
    "globalInputs": [ "input_VtWuBSik", "input_mj9iUMvw" ],
    "options": {
      "backgroundColor": "transparent"
    },
    "structure": [
      {
        "item": "viz_gsqlcpsd",
        "type": "repeating",
        "repeatFor": {
          "input": "input_VtWuBSik"
        },
        "position": { "x": 0, "y": 0, "w": 1200, "h": 400 }
      }
    ]
  },
  "dataSources": {
    "ds_xcdWhjuu": {
      "type": "ds.search",
      "options": {
        "queryParameters": {
          "earliest": "-24h@h",
          "latest": "now"
        },
        "query": "index=host_metrics measurement=cpu_time \r\n| search url IN($selected_server$) OR url=\"default_server\"\r\n| eval state_filter=if(match(state, \"^(idle|interrupt|nice|softirq|steal|system|user|wait)$\"), 1, 0)\r\n| where state_filter = 1\r\n| sort 0 _time url cpu state\r\n| streamstats current=f last(counter) as prev by url cpu state\r\n| eval delta = counter - prev\r\n| where delta >= 0\r\n| bin _time span=1m\r\n| eventstats sum(delta) as total by _time, url, cpu\r\n| eval percent = round((delta / total) * 100, 2)\r\n| eval url_state = url . \"_\" . state \r\n| timechart span=1m avg(percent) by url_state\r\n| foreach * [eval <<FIELD>> = round('<<FIELD>>', 2)]"
      },
      "name": "CPU_Util_Search_1"
    }
  },
  "title": "Test_Multi Line chart"
}
Here are the configs for on-prem customers willing to apply them to avoid adding more hardware cost. In 9.4.0 and above, most of the indexing configs are automated, which is why they were dropped from the 9.4.0 suggested list.

Note: this assumes the replication queue is full on most of the indexers and, as a result, the indexing pipeline is also full, while the indexers still have plenty of idle CPU and IO is not an issue.

On-prem Splunk version 9.4.0 and above

indexes.conf
[default]
maxMemMB=100

server.conf
[general]
autoAdjustQueue=true   (can be applied on any Splunk instance: UF/HF/SH/IDX)

Splunk version 9.1 to 9.3.x

indexes.conf
[default]
maxMemMB=100
maxConcurrentOptimizes=2
maxRunningProcessGroups=32
processTrackerServiceInterval=0

server.conf
[general]
parallelIngestionPipelines = 4
[queue=indexQueue]
maxSize=500MB
[queue=parsingQueue]
maxSize=500MB
[queue=httpInputQ]
maxSize = 500MB

maxMemMB: minimizes the creation of tsidx files as much as possible, at the cost of higher memory usage by the mothership (main splunkd).
maxConcurrentOptimizes: on the indexing side it is internally 1 no matter what the setting is set to. On the replication target side, launching more splunk-optimize processes means pausing the receiver until each splunk-optimize process is launched, so reducing it keeps the receiver doing more indexing work rather than launching splunk-optimize processes. With 9.4.0, both the source (index processor) and the target (replication-in thread) internally auto-adjust it to 1.
maxRunningProcessGroups: allows more splunk-optimize processes to run concurrently. With 9.4.0, it's automatic.
processTrackerServiceInterval: runs splunk-optimize processes as soon as possible. With 9.4.0, you don't have to change it.
parallelIngestionPipelines: have more receivers on the target side. With 9.4.0, you can enable auto-scaling of pipelines.
maxSize: don't let a huge batch ingestion by an HEC client block the queues and return 503. With 9.4.0 and autoAdjustQueue set to true, it's no longer a fixed-size queue.
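A minimal sketch for verifying the "queues are full" assumption before applying these settings; it uses the stock group=queue entries in metrics.log (current_size_kb and max_size_kb are the standard fields) and charts fill percentage per queue:

index=_internal source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m limit=20 avg(fill_pct) by name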
Hello Splunkers, I have a question around Splunk architecture and would greatly appreciate input from architects. The scenario is UF on log source > heavy forwarder > indexer. Basically, a Universal Forwarder gets installed on a log source with a configuration to connect to the deployment server. Once it connects to the DS, the DS pushes the outputs app and the corresponding technology add-on (i.e. Windows/Linux) to the Universal Forwarder. The outputs app on the log source (UF) forwards to a heavy forwarder over standard port 9997. On the heavy forwarder, an outputs app under etc/apps forwards to the indexers. So the question is: do I also need a Windows TA/Linux TA app on the heavy forwarder? Is it necessary? If I don't install a TA, my understanding is the heavy forwarder should still forward everything it receives over port 9997 (without a TA/inputs.conf) to the next Splunk instance - is that correct? Sorry, I know it's a long read, but I hope to receive some responses. Thank you, regards, Moh
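A minimal sketch of the heavy forwarder side, assuming hypothetical indexer hostnames idx1/idx2; with just these two stanzas the HF will relay everything it receives on its splunktcp input to the indexers, and a Windows/Linux TA is only needed on the HF if you want it to perform parsing-time work (props/transforms, index-time extractions) for that data, since data from a UF is parsed at the first heavy/full Splunk instance it reaches:

# outputs.conf on the heavy forwarder (idx1/idx2 are placeholders)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# inputs.conf on the heavy forwarder - listen for the UFs
[splunktcp://9997]
disabled = 0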
We have the following sourcetypes that come through the Tenable Add-on for Splunk: tenable:io:assets, tenable:io:plugin, tenable:io:audit_logs. Is there any app/dashboard that presents this data?
The Splunk Add-on for Windows is well known, and I am using it to parse my XmlWinEventLog. However, I am getting EventCode as a duplicated, multivalue field, like this:

4688
4688

I think I could find the reason: in transforms.conf there are two transforms for extracting EventCode:

[EventID_as_EventCode]
SOURCE_KEY = EventID
REGEX = (.+)
FORMAT = EventCode::$1

[EventID2_as_EventCode]
REGEX = <EventID.*?>(.+?)<\/EventID>.*
FORMAT = EventCode::$1

And in props.conf, both transforms are called:

REPORT-EventCode_from_xml = EventID_as_EventCode, EventID2_as_EventCode

However, I have never seen anyone mention this issue, so is this because of my log? My log is XML WinEventLog like this:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
  <System>
    <Provider Name='Microsoft-Windows-Security-Auditing' Guid='{68ad733a-0b7e-4010-a246-bad643c2e4c1}' />
    <EventID>4688</EventID>
    <Version>2</Version>
    <Level>0</Level>
    <Task>13312</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8020000000000000</Keywords>
    <TimeCreated SystemTime='2025-05-30T10:55:19.179279400Z' />
    <EventRecordID>25849216</EventRecordID>
    <Correlation />
    <Execution ProcessID='4' ThreadID='7780' />
    <Channel>Security</Channel>
    <Computer>ABCD-DE01.company.domain</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name='SubjectUserSid'>S-1-5-18</Data>
    <Data Name='SubjectUserName'>ABCD-DE01$</Data>
    <Data Name='SubjectDomainName'>COMPANY.DOMAIN</Data>
    <Data Name='SubjectLogonId'>0x3e7</Data>
    <Data Name='NewProcessId'>0x1c48</Data>
    <Data Name='NewProcessName'>C:\Windows\System32\net1.exe</Data>
    <Data Name='TokenElevationType'>%%1936</Data>
    <Data Name='ProcessId'>0x2a2c</Data>
    <Data Name='CommandLine'>C:\Windows\system32\net1 accounts</Data>
    <Data Name='TargetUserSid'>S-1-0-0</Data>
    <Data Name='TargetUserName'>-</Data>
    <Data Name='TargetDomainName'>-</Data>
    <Data Name='TargetLogonId'>0x0</Data>
    <Data Name='ParentProcessName'>C:\Windows\System32\net.exe</Data>
    <Data Name='MandatoryLabel'>S-1-16-16384</Data>
  </EventData>
</Event>

The result is that downstream evaluations that use EventCode cannot match it, like this one:

EVAL-process_name = if(EventCode=4688, New_Process_Name, Process_Name)
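A minimal sketch of one possible workaround, assuming you want to keep only the XML-based extraction so EventCode comes back single-valued; a local props.conf override that calls just one of the two transforms should do it (place the stanza wherever the add-on defines REPORT-EventCode_from_xml for your sourcetype - [XmlWinEventLog] is an assumption here):

# local/props.conf override (stanza name assumed, match the add-on's default)
[XmlWinEventLog]
REPORT-EventCode_from_xml = EventID2_as_EventCode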
I want to use Stream to forward DNS to Splunk, but I am having trouble with the initial configuration.

Info:
- Running Splunk Enterprise on an on-prem Windows server. The DNS servers are Windows DCs.
- Installed the Stream app and add-on on the Splunk Enterprise server; the add-on is installed on the Windows DCs.

Troubleshooting:
- When I go into the Stream app, it runs the setup and I get an error: Unable to establish connection to /en-us/custom/splunk_app_stream/ping/: End of file. Note: I am able to ping the Splunk server from the DNS server, and port 8000 is open on the Splunk server firewall.
- When I go into Configure Streams, DNS is enabled.
- On the DNS server, /etc/apps/Splunk_TA_stream/local/inputs.conf contains splunk_stream_app_location = https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/
- On the DNS server, /etc/apps/Splunk_TA_stream/default/streamfwd.conf contains:
[streamfwd]
port = 8889
ipAddr = 127.0.0.1
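A minimal sketch of the forwarder-side input, assuming the "End of file" ping error is an HTTP/HTTPS scheme mismatch (splunk_stream_app_location should use https:// only if Splunk Web actually has SSL enabled on port 8000, and http:// otherwise); the stanza name [streamfwd://streamfwd] is what the TA ships by default, so adjust if yours differs:

# Splunk_TA_stream/local/inputs.conf on the DNS server
[streamfwd://streamfwd]
splunk_stream_app_location = http://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/
disabled = 0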
I am trying to get a list of all services that are in APM. The APM usage report does not provide the names, only the number of hosts. I need the names of all services that are in APM, and I need to be able to export them.
In the documentation <https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.3/build-a-data-model/about-data-models>, it says that dataset constraints determine the first part of the search through:
- Simple search filters (root event datasets and all child datasets).
- Complex search strings (root search datasets).
- Transaction definitions (root transaction datasets).
In my new data model, I am trying to define a dataset constraint that selects only unique values of the field eventId. eventId is a number, e.g. 123456. My goal is to drop duplicated log lines. Is it possible to define this kind of dataset constraint?
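A minimal sketch of the usual workaround, assuming the duplicates can be dropped with dedup; a constraint on an event dataset is just a filter and cannot deduplicate, so the deduplication has to live in a root search dataset's base search instead (index and sourcetype below are placeholders), with the caveat that such a root search dataset may not be eligible for acceleration, since acceleration requires only streaming commands:

index=my_index sourcetype=my_sourcetype
| dedup eventId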