Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello Splunk Community, I'm seeking help regarding an issue I'm facing. The main problem is that vulnerability detection data is not showing up in my Splunk dashboard. Wazuh is installed and running correctly, and other data appears to be coming through, but the vulnerability detection events are missing.

I've verified that:
- Wazuh services are running properly without critical errors.
- Vulnerability Detector is enabled in the Wazuh configuration (ossec.conf).
- Wazuh agents are reporting other types of events successfully.

Despite this, no vulnerability data appears in the dashboard. Could someone guide me on how to troubleshoot this? Any advice on checking Wazuh modules, Splunk sourcetypes, indexes, or forwarder configurations would be highly appreciated. Thank you in advance for your support!
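A useful first triage step is to confirm whether the vulnerability events are reaching Splunk at all, or whether only the dashboard searches are failing to find them. A sketch of such a check - the sourcetype pattern and the "vulnerability-detector" rule group are assumptions based on typical Wazuh-to-Splunk setups, so adjust them to your deployment:

index=* sourcetype=wazuh* "vulnerability-detector" earliest=-24h
| stats count by index, sourcetype, source

If this returns nothing while other Wazuh events do arrive, the problem is upstream (the Wazuh module or the forwarder); if it returns events, the dashboard is probably pointing at the wrong index or sourcetype.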
If you really don't want to go down the multisite route then you could keep your RF at 3 and slowly introduce new indexers in the new location, offlining one from the old site as each new one is added, although I really would recommend the multisite approach personally... Here is what you would do:

1. Add all 3 new indexers to Site B while keeping the Site A indexers active.
2. Wait for full data replication to the new indexers (verify with splunk show cluster-status).
3. Gracefully decommission the Site A indexers one at a time, waiting for full rebalancing of buckets before doing the next one: splunk offline -auth <admin>:<password>
4. The cluster automatically rebalances data to maintain RF=3 during decommissioning.

Why this works as an approach:
- Maintains RF=3 compliance throughout
- Avoids the dangerous RF reduction step
- Uses Splunk's built-in rebalancing for safe peer removal
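For steps 2 and 3, a sketch of the commands involved (run cluster-status on the cluster manager; the --enforce-counts flag makes the peer wait until the cluster can meet its replication and search factors without it before shutting down):

# On the cluster manager: confirm RF/SF are met before and after each removal
splunk show cluster-status --verbose

# On the Site A peer being removed
splunk offline --enforce-counts -auth <admin>:<password>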
Hi @m_zandinia

I don't think upping your RF to 4 is the right approach here; you should probably treat this as a single-to-multisite cluster migration, even if you are going to deprecate the old site afterwards. There are some useful docs covering this at https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Migratetomultisite

Multisite clustering makes the cluster aware of physical locations (sites). You configure site-specific replication and search factors (site_replication_factor / site_search_factor). In outline:

1. Add the new Site B indexers to the cluster, assigning them to a new site (site2).
2. Configure the cluster manager for multisite operation, specifying that you need copies and/or searchable copies on both site1 (your existing Site A) and site2 (your new Site B).
3. Configure the manager to convert legacy buckets to multisite.
4. The cluster manager automatically replicates data between sites to satisfy these policies, ensuring Site B receives a complete copy of the data over time.
5. Once Site B has a full copy and the cluster is stable, you can safely decommission the Site A indexers by updating the site policies, putting the Site A peers in detention, waiting for buckets to be fixed, and then removing them.

Some things to bear in mind:
- Directly decreasing the Replication Factor (RF) when indexers holding necessary copies are offline can lead to data loss, because the cluster manager may still believe those hosts exist.
- Migrating data between sites using multisite replication takes time and network bandwidth. Monitor the cluster status closely (`splunk show cluster-status --verbose` or the Monitoring Console) to ensure replication completes before decommissioning the old site.
- Plan your site_replication_factor and site_search_factor carefully based on your desired redundancy and search availability during and after the migration.

Useful Documentation Links:
- Multisite indexer clusters
- Decommission a site
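To make step 2 concrete, a minimal sketch of what the cluster manager's server.conf could look like (site names and factor values are illustrative; check the migration docs above against your Splunk version):

# server.conf on the cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
# allow existing single-site buckets to be replicated across sites (step 3)
constrain_singlesite_buckets = false

Each new Site B peer then sets site = site2 in its own [general] stanza.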
Hi everyone,

I have 3 indexers (in a cluster) located on Site A. The current replication factor (RF) is set to 3. I need to move operations to Site B. However, for technical reasons, I cannot physically move the existing data; I will simply have 3 new indexers at Site B.

Here's the plan I'm considering:
1. Launch 1 new indexer at Site B.
2. Add the new indexer to the existing cluster.
3. Increase the RF to 4 (so that all raw data is fully replicated across the available indexers).
4. Shut down all 3 indexers at Site A.
5. Decrease the RF back to 3. (I understand there is a risk of some data loss.)
6. Add 2 additional new indexers to the cluster at Site B.

My main concern is step 5 (decreasing the RF), which I know is not best practice, but given my situation I don't have many options. Has anyone encountered a similar situation before? I'd appreciate any advice, lessons learned, or other options I might not be aware of. Thanks in advance!
Hi @Dy4

There can only be a single Submit button, which is enabled in the fieldset by setting submitButton="true" in the <fieldset> tag.
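Since the <submit> node isn't valid Simple XML, one way to get similar token-setting behaviour is a <change> block on the dropdown itself. A sketch, with the caveat that <change> fires when the selection changes, not when Submit is clicked, so this assumes setting the tokens on change is acceptable for your use case:

<form version="1.1" theme="dark">
  <fieldset submitButton="true" autoRun="true">
    <input type="dropdown" token="A_or_B" searchWhenChanged="false">
      <label>Select A or B</label>
      <default>A</default>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <change>
        <condition value="A">
          <set token="tokenX">1</set>
          <set token="tokenY">2</set>
        </condition>
        <condition value="B">
          <set token="tokenX">3</set>
          <set token="tokenY">4</set>
        </condition>
      </change>
    </input>
  </fieldset>
</form>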
Hi @raomu

You need to correct your outputs.conf configuration, as you have a duplicate stanza name "[tcpout: splunkonprem]" and you haven't defined the "splunkcloud" output group. Additionally, the defaultGroup setting in the [tcpout] stanza determines where data goes if an outputgroup is not specified in inputs.conf.

To send only "dd-log-token2" data to both destinations and all other data only to On-Prem (as implied by your goal), configure:

outputs.conf

[tcpout]
# Data without a specific outputgroup goes here
defaultGroup = splunkonprem
forceTimebasedAutoLB = true

[tcpout:splunkonprem]
# Your On-Prem indexers
server = zyx.com:9997, abc.com:9997

[tcpout:splunkcloud]
# Your Splunk Cloud forwarder endpoint
server = <your_splunk_cloud_inputs_endpoints>:9997

Add other relevant settings like compressed=true and useACK=true if needed, plus any required Splunk Cloud specific settings (e.g., sslCertPath, sslPassword if using certs).

inputs.conf on Heavy Forwarder

[http://dd-log-token1]
index = ddlogs1
token = XXXXX XXX XXX XXX

[http://dd-log-token2]
index = ddlogs2
token = XXXXX XXX XXX XXX
# This overrides defaultGroup and sends to both
outputgroup = splunkonprem, splunkcloud

[http://dd-log-token3]
index = ddlogs3
token = XXXXX XXX XXX XXX

Explanation:
- outputs.conf / [tcpout] / defaultGroup: Sets the default destination(s) for data that doesn't have a specific outputgroup assigned in inputs.conf. In this corrected example, data defaults to "splunkonprem" only.
- outputs.conf / [tcpout:groupname]: Defines named output groups. You need one stanza for each group (splunkonprem and splunkcloud) with the correct server details. Stanza names must be unique.
- inputs.conf / [stanza] / outputgroup: Assigns data from that specific input stanza to the listed output group(s), overriding the defaultGroup. The setting "outputgroup = splunkonprem, splunkcloud" sends data from [http://dd-log-token2] to both defined groups.

Further Troubleshooting:
- Can you see your Splunk forwarder establishing a connection to Splunk Cloud successfully? We need to rule out connection issues to Splunk Cloud which aren't related to the outputgroup. Check $SPLUNK_HOME/var/log/splunk/splunkd.log for errors setting up the connection.
- Ensure the Splunk Cloud inputs endpoint (<your_splunk_cloud_inputs_endpoints>:9997) is correct for your stack. There are often ~12 input servers listed.
- Verify network connectivity (firewall rules) from the Heavy Forwarder to both your On-Prem indexers and the Splunk Cloud inputs endpoint on port 9997.
- Restart the Splunk forwarder service after applying configuration changes.

Useful Docs:
- outputs.conf: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
- inputs.conf (HTTP Event Collector section): https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#http:_.28HTTP_Event_Collector.29
- Forward data based on source, sourcetype, or host: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_input_configuration
No. Those settings are per input so you can have just one set of settings for each separate event log. What you could try though (but I'm not sure if the inputs can handle them) is creating a view of the event log and ingesting events from that view using another input. But as I said, I have no clue if this'll work.
1. outputgroup = <string>
   * The name of the output group to which the event collector forwards data.
   * There is no support for using this setting to send data over HTTP with a heavy forwarder.
2. For cloud you don't send to 9997.
3. You can't use http output and normal s2s output at the same time.
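Regarding points 2 and 3: if HTTP output to Splunk Cloud is the route you end up taking, outputs.conf has a separate [httpout] stanza for that. A minimal sketch, with the token and URI as placeholders for values from your own Splunk Cloud stack, and keeping in mind the caveat above that it can't be combined with [tcpout] on the same instance:

# outputs.conf
[httpout]
httpEventCollectorToken = <your HEC token>
uri = https://http-inputs.<your-stack>.splunkcloud.com:443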
We have a heavy forwarder that accepts logs over HEC.

inputs.conf

[http://dd-log-token1]
index = ddlogs1
token = XXXXX XXX XXX XXX

[http://dd-log-token2]
index = ddlogs2
token = XXXXX XXX XXX XXX

[http://dd-log-token3]
index = ddlogs3
token = XXXXX XXX XXX XXX

I want to forward only the below input to 2 different Splunk instances: 1) SplunkCloud (hosted by Splunk), 2) SplunkOnPrem.

[http://dd-log-token2]
index = ddlogs2
token = XXXXX XXX XXX XXX

This is what my inputs.conf looks like now:

[http://dd-log-token1]
index = ddlogs1
token = XXXXX XXX XXX XXX

[http://dd-log-token2]
index = ddlogs2
token = XXXXX XXX XXX XXX
outputgroup = splunkonprem, splunkcloud

[http://dd-log-token3]
index = ddlogs3
token = XXXXX XXX XXX XXX

outputs.conf

[tcpout]
defaultgroup = splunkonprem,splunkcloud
forceTimebasedAutoLB = true

[tcpout: splunkonprem]
server = zyx.com:9997, abc.com:9997

[tcpout: splunkonprem]
server = mmm.com:9997, bbb.com:9997

But these settings are only sending logs to the On-Prem indexers, not to the SplunkCloud indexers. Please suggest if you have any idea what's wrong with my configuration.
This just makes things confusing - why do the RPM and DEB versions (both x86 and ARM) and the Windows build of v9.3.3 have build hash `75595d8f83ef`, but when you look at the Solaris UFs, the build hash is `740e48416363`?! What gives? This just makes our lives more difficult when trying to organize large-scale downloads for users in a heterogeneous environment...
The documentation that you refer to has them both in the same stanza, in steps 3 and 4:

Break and reassemble the data stream into events

This method oftentimes simplifies the configuration process, as it gives you access to several settings that you can use to define line-merging rules. You must perform these steps on the heavy forwarder that you have designated to send data to your Splunk Cloud Platform instance.

1. On the forwarder that is to send data to your Splunk Cloud Platform instance, use a text editor to open $SPLUNK_HOME/etc/system/local/props.conf for editing.
2. In this file, specify a stanza in the props.conf configuration file that represents the stream of data you want to break and reassemble into events.
3. In that stanza, configure the LINE_BREAKER setting with a regular expression that breaks the data stream into multiple lines.
4. Add the SHOULD_LINEMERGE setting to the stanza, and set its value to true.
5. Configure additional line-merging settings, such as BREAK_ONLY_BEFORE and others, to specify how the forwarder is to reassemble the lines into events. For more information on the line-merging settings, see Attributes that apply only when the SHOULD_LINEMERGE setting is true later in this topic.

If your data conforms well to the default LINE_BREAKER value, which is any number of newlines and carriage returns, you don't need to change the LINE_BREAKER setting. Instead, set SHOULD_LINEMERGE=true and use the line-merging settings to reassemble the data.
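Put together, a props.conf stanza following those steps could look like this. A minimal sketch, assuming a hypothetical sourcetype whose multi-line events each start with an ISO-style date:

# props.conf on the heavy forwarder
[my:custom:sourcetype]
# break the stream into individual lines first
LINE_BREAKER = ([\r\n]+)
# then reassemble lines into events, starting a new event
# whenever a line begins with a date like 2025-04-23
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}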
Splunk gives a validation warning that the unknown node <submit> is not allowed here. Are there any fixes for this?

<form version="1.1" theme="dark">
  <!-- Fieldset for dropdown input -->
  <fieldset submitButton="true" autoRun="true">
    <input type="dropdown" token="A_or_B" searchWhenChanged="false">
      <label>Select A or B</label>
      <default>A</default>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
    </input>
  </fieldset>
  <!-- Submit block, should be placed directly inside form -->
  <submit>
    <condition match="$A_or_B$ == &quot;A&quot;">
      <set token="tokenX">1</set>
      <set token="tokenY">2</set>
    </condition>
    <condition match="$A_or_B$ == &quot;B&quot;">
      <set token="tokenX">3</set>
      <set token="tokenY">4</set>
    </condition>
  </submit>
</form>

Microsoft 365 App for Splunk
Following, as I need to do this soon as well... hope you figure it out so I can too.
Short question: can I configure my Windows UF inputs.conf to collect Security event logs with renderXml=false, unless EventCode=4662; if EventCode=4662 then I want renderXml=true.

inputs.conf file:

[WinEventLog://Security]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml = false
# (if EventCode=4662 then set renderXml=true)

I read that maybe a transforms.conf would help with this...?

The reason for this configuration request is so that I can utilize this search for DCSync attacks provided by Splunk Enterprise Security, which only seems to work with XML-ingested Security Event 4662:

ESCU - Windows AD Replication Request Initiated by User Account - Rule

`wineventlog_security` EventCode=4662 ObjectType IN ("%{19195a5b-6da0-11d0-afd3-00c04fd930c9}","domainDNS") AND Properties IN ("*Replicating Directory Changes All*", "*{1131f6ad-9c07-11d1-f79f-00c04fc2dcd2}*","*{9923a32a-3607-11d2-b9be-0000f87a36b2}*","*{1131f6ac-9c07-11d1-f79f-00c04fc2dcd2}*") AND AccessMask="0x100" AND NOT (SubjectUserSid="NT AUT*" OR SubjectUserSid="S-1-5-18" OR SubjectDomainName="Window Manager" OR SubjectUserName="*$")
| stats min(_time) as _time, count by SubjectDomainName, SubjectUserName, Computer, Logon_ID, ObjectName, ObjectServer, ObjectType, OperationType, status dest
| rename SubjectDomainName as Target_Domain, SubjectUserName as user, Logon_ID as TargetLogonId, _time as attack_time
| appendpipe [| map search="search `wineventlog_security` EventCode=4624 TargetLogonId=$TargetLogonId$" | fields - status]
| table attack_time, AuthenticationPackageName, LogonProcessName, LogonType, TargetUserSid, Target_Domain, user, Computer, TargetLogonId, status, src_ip, src_category, ObjectName, ObjectServer, ObjectType, OperationType, dest
| stats min(attack_time) as _time values(TargetUserSid) as TargetUserSid, values(Target_Domain) as Target_Domain, values(user) as user, values(Computer) as Computer, values(status) as status, values(src_category) as src_category, values(src_ip) as src_ip by TargetLogonId dest
I configured the DEBUG logs and will hopefully have more to go off next week. One thing I thought I'd bring up is some confusion regarding the Certificate Profile, Subject Alternative Names (SANs) and Extended Key Usage we should be specifying when we send our CSRs to the high-side PKI portal.

Certificate Profiles
We have a few certificate profiles to choose from, including: device, domain controller, device/TLS/Application Email, Mini Crypto Key Agreement, Encrypted File System, IPSEC, Mini Crypto Authentication, Robotic Process (RPA/BO), and what seemed to be the most fitting, TLS Server.

Subject Alternative Names (loopback?)
From there, we've been defining the Subject Alternative Name which makes the most sense, the device IP address. However, I'm being told we should be using 127.0.0.1 instead. What's your take on that?

Key Usage/Extended Key Usage Selections
When selecting the TLS Server certificate profile, the default Key Usages are selected:
Key Usage: digitalSignature, keyEncipherment
Extended Key Usage: id-kp-serverAuth

I'm being told that we need to include Extended Key Usages for smartCardAuth and possibly id-kp-clientAuth. The problem is that when we select additional keys for the TLS Server profile, the portal, instead of automatically approving the CSR, kicks it back with the following error:

"An extended key usage was found that requires the certificate application to be queued. EKU for "smarCardLogon" for profile "tlsServer" requires the certificate application to be queued"

This MASSIVELY slows down the troubleshooting process, making it quite difficult to iteratively troubleshoot (not Splunk's problem, but I needed to lament the hardship). I know this is a LOT and truly appreciate everyone's help on this. Some of my peers have been trying to figure this out for over a year! If we can figure this out it'd be a massive win for a whole bunch of burnt-out sys admins.

Goal/tl;dr: confirm the minimal, PKI-approved certificate profile, SAN list, and EKU set that Splunk Web needs for CAC / SIPR-token authentication, so we can eliminate certificate-format variables from our troubleshooting.

Thanks again!
@livehybrid if I remove [] it's just creating dummy logs with the path; it's not actually returning search results, like below:

4/23/25 9:36:05.515 AM
04-23-25 09:36:05,515 [3820] DEBUG Common <> - Started thread
FileSource = *\\test\\abc\\test\\xyz\\abc\\123\\ITEM\\*
eventtype = exclude_known_allowed
host = test
index = test
linecount = 1
Hi @Ana_Smith1

Both of those apps are archived and unsupported. I would suggest looking at https://splunkbase.splunk.com/app/6168 which is still in active development by the developer. This will allow you to run JQL queries to pull back your data; you could set this up as 10 different inputs for your 10 projects, or write a JQL query to pull all 10 back in one go.
Hi @sanjai

How odd! Which version of Splunk are you running, and on what OS/Arch? If you navigate to $SPLUNK_HOME/etc/apps/search/appserver/static/, do you have an appIcon.png?
Hi @CarlosNoob

If you want to be able to update apps from within your Splunk server's apps list, then you need to enable the server to access https://apps.splunk.com/ - this is detailed in server.conf. If you want the update notifications, or to access docs linked from various parts of Splunk, then the server needs to be able to access http://quickdraw.splunk.com - this is detailed in web.conf.

Note: Splunk HF/Enterprise does not have the ability to update itself; it can only notify you of an update. You would need to download the packages from https://splunk.com/download
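For reference, a sketch of the settings involved (stanza and setting names to the best of my knowledge; verify against the server.conf and web.conf specs for your version before relying on them):

# server.conf - lets the apps list reach Splunkbase
[applicationsManagement]
allowInternetAccess = true

# web.conf - the update-checker phone-home; set to empty to disable
[settings]
updateCheckerBaseURL = http://quickdraw.splunk.com/js/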
Gravedigging for karma / answering questions left unanswered:

This would be set in most cases on the receiving indexers' props.conf. And you are very right: in no case should ../default/ be modified. Create a ../local/, or a whole new app if you'd like, with a ../local/props.conf.

If all you do is modify a local props.conf by adding a sourcetype stanza with TZ, there's no requirement to restart Splunk; it should detect that change on its own.
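For anyone landing here later, a minimal sketch of what that looks like (the app, sourcetype name, and timezone are placeholders):

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf on the indexers
[my_sourcetype]
# interpret timestamps that lack an explicit offset as US Eastern
TZ = America/New_York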