All Posts


First of all, thanks for your reply. I ran the following search in Splunk:

index="wazuh-alerts" "data.vulnerability.severity"="Medium" | stats count

I also tested other severity levels like "High" and "Low," but the result was always 0. This indicates that no vulnerability detection events are being indexed in Splunk. Even though other types of data are coming through, there are currently no events where data.vulnerability.severity is populated with "High," "Medium," or "Low." It suggests that either:
- Vulnerability Detector is not generating results,
- the events are not being forwarded to Splunk properly, or
- the events are being indexed but under a different sourcetype, index, or field structure.

Would appreciate any guidance on how to dig deeper into this!
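If it helps to see where (or whether) the Wazuh vulnerability events are landing at all, here is a minimal sketch of a broad check across indexes and sourcetypes (assuming your role can see the relevant indexes; adjust the time range as needed):

| tstats count where index=* earliest=-24h by index sourcetype
| sort - count

If the events exist under a different index or sourcetype, they should show up here; you can then rerun the severity search against that index, or search index=* "data.vulnerability.severity"=* to confirm the field is populated anywhere.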
Hi @Alan_Chan, this transformation has no effect on the SH; it must live on the first HF that the data passes through. Are you sure you applied it on the first HF? Then check that the sourcetype you associated the SEDCMD with is the correct one, and that there isn't any other transformation on this sourcetype. Finally, are you sure it is useful to remove these few characters? Ciao. Giuseppe
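For reference, a minimal sketch of how the SEDCMD stanza might look in props.conf on the first heavy forwarder in the data path (the sourcetype name is a placeholder, and the * quantifier is an assumption, since forum rendering may have dropped characters from the original pattern):

# props.conf on the first HF that parses this data
[your_json_sourcetype]
SEDCMD-keepjson = s/^[^{]*{/{/

Remember that SEDCMD is applied at parse time, so it only affects data indexed after the change and after a restart or configuration reload on that HF.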
Hi @kunalsingh, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
I am trying to remove everything before the { character to preserve the JSON format. I am using SEDCMD-keepjson = s/^[^{]{/{/ in the sourcetype configuration, but it fails to apply correctly. However, when I use the search command | rex mode=sed "s/^[^{]{/{/", it successfully removes the unwanted text. I am wondering what could be causing this issue. The sourcetype settings are configured on both the Search Head (SH) and the Heavy Forwarder (HF). Sample event:

Mar 28 13:11:57 abcdeabcdev01w.abcdabcd.local {<json_log>}
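To rule out the expression itself, it can be prototyped at search time against the sample event. A minimal sketch using makeresults (the JSON payload is invented for illustration, and the * quantifier is an assumption about what the pattern may have looked like before the forum stripped it):

| makeresults
| eval _raw="Mar 28 13:11:57 abcdeabcdev01w.abcdabcd.local {\"key\": \"value\"}"
| rex mode=sed "s/^[^{]*{/{/"
| table _raw

If this leaves only the JSON in _raw, the remaining question is whether the index-time SEDCMD is applied on the instance that actually parses the data (typically the first heavy forwarder), not on the search head.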
Hi @kunalsingh

Use a REPORT transform in props.conf and transforms.conf to define the field extractions based on your delimiters.

==props.conf==
[your_sourcetype]
# Replace your_sourcetype with the actual sourcetype of your data
REPORT-kv_pairs = extract_custom_kv

==transforms.conf==
[extract_custom_kv]
REGEX = ([^=\^]+)=([^\^]*)
FORMAT = $1::$2
MV_ADD = true

This configuration defines a field extraction named extract_custom_kv.
- REGEX = ([^=\^]+)=([^\^]*): This regular expression finds key-value pairs separated by =. ([^=\^]+) captures the key (any character except = or ^), = matches the literal equals sign, and ([^\^]*) captures the value (any character except ^, including an empty string). This correctly handles fields like documentName= where the value is empty.
- FORMAT = $1::$2: This assigns the captured key (group 1) and value (group 2) to a Splunk field.
- MV_ADD = true: Ensures that if multiple key-value pairs are found in a single event, they are all extracted.

Check a working example at https://regex101.com/r/yAjRVa/1

This method correctly identifies the ^ character as the delimiter between pairs and = as the separator within a pair, handling empty values appropriately. The regex you provided, ^([^=]+)=([^^\]), likely failed because the ^ anchor restricts it to the start of the string, and the character class [^^\*] might not behave as expected compared to [^\^].
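If you want to prototype delimiter-based extraction in a search before committing the REPORT transform, the extract (kv) command is one option. A minimal sketch, assuming ^ separates pairs and = separates key from value (index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| extract pairdelim="^" kvdelim="="
| table documentName *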
Hi @luminousplumz

You need to apply the mqtttopic transform before the mqtttojson transform overwrites the _raw field. The order in TRANSFORMS-* matters. Also, adjust the mqtttopic regex and format for correct field extraction.

transforms.conf:
[mqtttojson]
REGEX = msg\=(.+)
FORMAT = $1
DEST_KEY = _raw

[mqtttopic]
# Extract from the original _raw field containing 'topic='
REGEX = topic=tgw\/data\/0x155f\/([^\/]+)
FORMAT = Topic::$1
WRITE_META = true

props.conf:
[mqtttojson_ubnpfc_all]
# Apply mqtttopic first, then mqtttojson
TRANSFORMS-topic_then_json = mqtttopic, mqtttojson
# The rest of your props.conf settings remain the same
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = \"ts\":
TZ = Europe/London
category = Custom
pulldown_type = 1
# Ensure KV_MODE=none if you don't want Splunk's default key-value extraction
# KV_MODE = none
# Ensure JSON extraction runs after transforms if needed
# INDEXED_EXTRACTIONS = json

- Transform Order: The TRANSFORMS-topic_then_json line in props.conf should have mqtttopic first. This ensures it runs on the original event data before mqtttojson overwrites _raw with the JSON payload.
- mqtttopic REGEX: The regex topic=tgw\/data\/0x155f\/([^\/]+) specifically looks for the topic= string, skips the known prefix tgw/data/0x155f/, and captures the next segment of characters that are not a forward slash (/) into capture group 1.
- mqtttopic FORMAT: FORMAT = Topic::$1 creates a new field named Topic containing the value captured by the regex (the desired topic segment, e.g., "TransportContextTracking").
- mqtttopic WRITE_META: WRITE_META = true ensures the extracted field (Topic) is written to the index metadata, making it available for searching even though the original _raw field is later overwritten.
- mqtttojson: This transform runs second. It extracts the JSON part from the msg= field (which still exists in the original event data at this stage) and overwrites _raw with just the JSON content. Splunk's automatic JSON parsing (or INDEXED_EXTRACTIONS = json) will then parse this new _raw.

Some useful tips:
- Restart the Splunk instance or reload the configuration for changes in props.conf and transforms.conf to take effect.
- Ensure the sourcetype mqtttojson_ubnpfc_all is correctly assigned to your MQTT data input.
- Test the regex using Splunk's rex command in search or on regex testing websites against your raw event data to confirm it captures the correct value.
- If Splunk's automatic key-value extraction interferes before your transforms run, you might need KV_MODE = none in props.conf.
- If Splunk isn't automatically parsing the final JSON _raw, add INDEXED_EXTRACTIONS = json to your props.conf stanza.
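To confirm the topic regex captures the intended segment before deploying the transform, here is a quick search-time sketch (the sample event text is an assumption based on the thread):

| makeresults
| eval _raw="name=x event_id=1 topic=tgw/data/0x155f/TransportContextTracking msg={\"ts\": 1700000000}"
| rex "topic=tgw\/data\/0x155f\/(?<Topic>[^\/]+)"
| table Topic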
| rex "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(?=:\d+)"
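A quick way to check this against the sample value from the question, as a sketch (the field name X_Upstream is an assumption about how the header ends up extracted in your events):

| makeresults
| eval X_Upstream="11.111.11.11:81"
| rex field=X_Upstream "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(?=:\d+)"
| table X_Upstream ip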
I would like to extract an IP address from a text field where the IP address has a trailing port number. The text is like so:

X-Upstream:"11.111.11.11:81"

The extraction would provide only the IP address. Thanks.
Hi @luminousplumz,

For index-time field extractions, you want something like this (note the order of the transforms in the TRANSFORMS-mqtt setting):

# fields.conf
[sourcetype::mqtttojson_ubnpfc_all::Topic]
INDEXED = true

# props.conf
[mqtttojson_ubnpfc_all]
TRANSFORMS-mqtt = mqtttopic,mqtttojson

# transforms.conf
[mqtttojson]
CLEAN_KEYS = 0
DEST_KEY = _raw
FORMAT = $1
REGEX = msg=(.+)

[mqtttopic]
CLEAN_KEYS = 0
FORMAT = Topic::$1
REGEX = topic=(?:[^/]*/){3}([^/]+)
WRITE_META = true

For search-time field extractions, you want something like this:

[mqtttojson_ubnpfc_all]
EXTRACT-Topic = topic=(?:[^/]*/){3}(?<Topic>[^/]+)
EVAL-_raw = replace(_raw, ".*? msg=", "")

However, in the search-time configuration, you'll need to extract the JSON fields in a search, as automatic key-value field extraction happens before calculated fields (EVAL-*):

sourcetype=mqtttojson_ubnpfc_all | spath

You'll note that the original name, event_id, topic, and msg (value possibly truncated) fields are automatically extracted before the full value of msg is assigned to _raw.
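To confirm which of these settings actually win on a given instance, btool shows the merged configuration; a sketch (run on the instance where the props/transforms are deployed):

$SPLUNK_HOME/bin/splunk btool props list mqtttojson_ubnpfc_all --debug
$SPLUNK_HOME/bin/splunk btool transforms list mqtttopic --debug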
Hi @kunalsingh, please try this:

\^([^\=]+)=([^\^]*)

Ciao. Giuseppe
Hi @SPL_Dummy, no, you can set the renderXml option to true or false for an input, not for a part of it. To use this Correlation Search, create a new one by cloning it and modifying the sourcetype contained in the macros. Ciao. Giuseppe
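For reference, a minimal sketch of how renderXml is set per input stanza in inputs.conf on the Windows host or forwarder (the channel names are just examples):

[WinEventLog://Security]
renderXml = true

[WinEventLog://Application]
renderXml = false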
Hi @ranafge

The first thing I would try here is opening some of the searches in the dashboards in Search (click the little magnifying glass) and checking for any errors. If there are still no results, try removing parts of the search to see whether a particular line is causing the issue; if so, let us know the specifics and we can work out what the problem is.
Hello Splunk Community, I'm seeking help regarding an issue I'm facing. The main problem is that vulnerability detection data is not showing up in my Splunk dashboard. Wazuh is installed and running correctly, and other data appears to be coming through, but the vulnerability detection events are missing. I've verified that:
- Wazuh services are running properly without critical errors.
- Vulnerability Detector is enabled in the Wazuh configuration (ossec.conf).
- Wazuh agents are reporting other types of events successfully.

Despite this, no vulnerability data appears in the dashboard. Could someone guide me on how to troubleshoot this? Any advice on checking Wazuh modules, Splunk sourcetypes, indexes, or forwarder configurations would be highly appreciated. Thank you in advance for your support!
If you really don't want to go down the multisite route, you could keep your RF at 3 and slowly introduce new indexers in the new location, offlining one from the old site as each new one is added, although I really would recommend the multisite approach personally...

Here is what you would do:
1. Add all 3 new indexers to Site B while keeping the Site A indexers active.
2. Wait for full data replication to the new indexers (verify before proceeding; see the sketch below).
3. Gracefully decommission the Site A indexers one at a time, waiting for full rebalancing of buckets before doing the next one: splunk offline -auth <admin>:<password>
4. The cluster automatically rebalances data to maintain RF=3 during decommissioning.

Why this works as an approach:
- Maintains RF=3 compliance throughout
- Avoids the dangerous RF-reduction step
- Uses Splunk's built-in rebalancing for safe peer removal
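A sketch of the commands involved, assuming a standard cluster manager setup (status and rebalance commands run on the manager; offline runs on each peer):

# On the cluster manager: confirm the replication and search factors are met
splunk show cluster-status --verbose

# Optionally rebalance data across the enlarged peer set
splunk rebalance cluster-data -action start

# On each Site A peer, one at a time, once the cluster is healthy again
splunk offline -auth <admin>:<password>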
Hi @m_zandinia

I don't think upping your RF to 4 is the right approach here; you should probably treat this as a single-to-multisite cluster migration, even if you are going to deprecate the old site afterwards. There are some useful docs at https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Migratetomultisite covering this.

Multisite clustering makes the cluster aware of physical locations (sites). You configure site-specific replication and search factors (site_replication_factor / site_search_factor).
- Add the new Site B indexers to the cluster, assigning them to a new site (site2).
- Configure the cluster manager for multisite operation, specifying that you need copies and/or searchable copies on both site1 (your existing Site A) and site2 (your new Site B).
- Configure the manager to convert legacy buckets to multisite (see the sketch below).
- The cluster manager automatically replicates data between sites to satisfy these policies, ensuring Site B receives a complete copy of the data over time.
- Once Site B has a full copy and the cluster is stable, you can safely decommission Site A indexers by updating the site policies, putting Site A peers in detention, waiting for buckets to be fixed, and then removing them.

Directly decreasing the Replication Factor (RF) when indexers holding necessary copies are offline can lead to data loss because the cluster manager may still believe those hosts exist. Migrating data between sites using multisite replication takes time and network bandwidth. Monitor the cluster status closely (splunk show cluster-status --verbose or the Monitoring Console) to ensure replication completes before decommissioning the old site. Plan your site_replication_factor and site_search_factor carefully based on your desired redundancy and search availability during and after the migration.

Useful Documentation Links:
- Multisite indexer clusters
- Decommission a site
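For orientation, a minimal sketch of the server.conf settings involved (site names and factor values are illustrative assumptions, not a recommendation; adjust to your redundancy needs):

# server.conf on the cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
# Allow existing single-site (legacy) buckets to be replicated under the multisite policies
constrain_singlesite_buckets = false

# server.conf on each new Site B peer
[general]
site = site2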
Hi everyone, I have 3 indexers (in a cluster) located on Site A. The current replication factor (RF) is set to 3. I need to move operations to Site B. However, for technical reasons, I cannot physically move the existing data — I will simply have 3 new indexers at Site B. Here's the plan I'm considering:
1. Launch 1 new indexer at Site B.
2. Add the new indexer to the existing cluster.
3. Increase the RF to 4 (so that all raw data is fully replicated across the available indexers).
4. Shut down all 3 indexers at Site A.
5. Decrease the RF back to 3. (I understand there is a risk of some data loss.)
6. Add 2 additional new indexers to the cluster at Site B.

My main concern is step 5 — decreasing the RF — which I know is not best practice, but given my situation, I don't have many options. Has anyone encountered a similar situation before? I'd appreciate any advice, lessons learned, or other options I might not be aware of. Thanks in advance!
Hi @Dy4

There can only be a single Submit button, which should be enabled in the fieldset by setting submitButton=true in the <fieldset> tag.
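A minimal Simple XML sketch showing where the attribute goes (the input and search are just examples):

<form version="1.1">
  <label>Example form</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="user_tok">
      <label>User</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal user=$user_tok$ | stats count</query>
        </search>
      </table>
    </panel>
  </row>
</form>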
Hi @raomu

You need to correct your outputs.conf configuration as you have a duplicate stanza name "[tcpout: splunkonprem]" and you haven't defined the "splunkcloud" output group. Additionally, the defaultGroup setting in the [tcpout] stanza determines where data goes if an outputgroup is not specified in inputs.conf. To send only "dd-log-token2" data to both destinations and all other data only to On-Prem (as implied by your goal), configure:

outputs.conf
[tcpout]
# Data without a specific outputgroup goes here
defaultGroup = splunkonprem
forceTimebasedAutoLB = true

[tcpout:splunkonprem]
# Your On-Prem indexers
server = zyx.com:9997, abc.com:9997

[tcpout:splunkcloud]
# Your Splunk Cloud forwarder endpoint
server = <your_splunk_cloud_inputs_endpoints>:9997

Add other relevant settings like compressed=true, useACK=true if needed, and any required Splunk Cloud specific settings (e.g., sslCertPath, sslPassword if using certs).

inputs.conf on Heavy Forwarder
[http://dd-log-token1]
index = ddlogs1
token = XXXXX XXX XXX XXX

[http://dd-log-token2]
index = ddlogs2
token = XXXXX XXX XXX XXX
# This overrides defaultGroup and sends to both
outputgroup = splunkonprem, splunkcloud

[http://dd-log-token3]
index = ddlogs3
token = XXXXX XXX XXX XXX

Explanation:
- outputs.conf / [tcpout] / defaultGroup: Sets the default destination(s) for data that doesn't have a specific outputgroup assigned in inputs.conf. In this corrected example, data defaults to "splunkonprem" only.
- outputs.conf / [tcpout:groupname]: Defines named output groups. You need one stanza for each group (splunkonprem and splunkcloud) with the correct server details. Stanza names must be unique.
- inputs.conf / [stanza] / outputgroup: Assigns data from that specific input stanza to the listed output group(s), overriding the defaultGroup. The setting "outputgroup = splunkonprem, splunkcloud" sends data from [http://dd-log-token2/] to both defined groups.

Further Troubleshooting:
- Can you see your Splunk Forwarder establishing a connection to Splunk Cloud successfully? We need to rule out connection issues to Splunk Cloud which aren't related to the outputgroup. Check $SPLUNK_HOME/var/log/splunk/splunkd.log for errors setting up the connection.
- Ensure the Splunk Cloud inputs endpoint (<your_splunk_cloud_inputs_endpoints>:9997) is correct for your stack. There are often ~12 input servers listed.
- Verify network connectivity (firewall rules) from the Heavy Forwarder to both your On-Prem indexers and the Splunk Cloud inputs endpoint on port 9997.
- Restart the Splunk forwarder service after applying configuration changes.

Useful Docs:
- outputs.conf: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf
- inputs.conf (HTTP Event Collector section): https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#http:_.28HTTP_Event_Collector.29
- Forward data based on source, sourcetype, or host: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_input_configuration
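To verify how the merged configuration resolves and whether the forwarder is actually connecting after the change, a sketch (paths assume a default Linux install):

# Show the merged output and input configuration as Splunk sees it
$SPLUNK_HOME/bin/splunk btool outputs list --debug
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# Watch for connection or routing errors after a restart
tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -Ei "tcpout|httpinput|error|warn"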
No. Those settings are per input so you can have just one set of settings for each separate event log. What you could try though (but I'm not sure if the inputs can handle them) is creating a view of the event log and ingesting events from that view using another input. But as I said, I have no clue if this'll work.
1. outputgroup = <string>
   * The name of the output group to which the event collector forwards data.
   * There is no support for using this setting to send data over HTTP with a heavy forwarder.
2. For cloud you don't send to 9997.
3. You can't use HTTP output and normal S2S output at the same time.