All Posts

Hello @livehybrid, it looks like the JSON comes from the Vector agent via Kafka, which is why we end up with JSON. Is it possible to convert the JSON back to a raw log in Splunk?
Hi @Pete_

The splunkclouduf.spl app configures secure forwarding to Splunk Cloud, so you should not need to modify outputs.conf directly. Also, because you can see the new forwarders in the Cloud Monitoring Console (CMC), we know the outputs are established and the new UFs can reach Splunk Cloud.

The testing you've done shows the syslog feed arriving at the box on port 514; however, is Splunk actually listening on that port? If you run the following, can you see that splunkd is listening?

sudo netstat -tulnp | grep 514

Are there any messages in $SPLUNK_HOME/var/log/splunk/splunkd.log about binding port 514, or any errors? (A quick grep for this is sketched after this post.)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
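A quick way to run that splunkd.log check from the shell (a minimal sketch; the path assumes a default Linux UF install, and note that binding to ports below 1024 normally requires splunkd to run with sufficient privileges):

# adjust the path if $SPLUNK_HOME is not /opt/splunkforwarder
grep -i "514" /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -iE "bind|listen|error" | tail -n 50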
I think I know how to do this, but I thought it would be best to check with some of the experts here first.

I am upgrading the hardware (storage expansion) on our indexers, and this will require turning off and unplugging each device. The indexers are clustered with a Replication Factor of 2. From what I have read (see the command sketch below):
- I can issue the 'splunk offline' command on the indexer I am working on
- Wait for the indexer to wrap up any tasks
- Then shut down and unplug the machine to perform the upgrade
- Once complete, I can plug it back in and turn it back on (and make sure Splunk starts running again)

Am I missing anything important? Thanks!
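A rough shape of that sequence from the CLI (a sketch only, assuming a default install path; how long the peer takes to wrap up depends on your cluster):

# run on the indexer being upgraded
/opt/splunk/bin/splunk offline       # gracefully takes the peer offline so the cluster can adjust
# wait for the command to return and the peer to finish, then power down and do the hardware work
# after powering the machine back on:
/opt/splunk/bin/splunk status        # confirm splunkd came back up (enable boot-start if it did not)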
Hello, I am having issues getting data into Splunk Cloud with two new Universal Forwarders. I have two existing Universal Forwarders that are working just fine, but I am migrating them to new servers. The Universal Forwarder version is the same on both the old and new servers (9.4.3).

I have the Universal Forwarder software installed on both of the new Linux servers. I copied the inputs.conf and outputs.conf files from the old servers. I also installed splunkclouduf.spl, which I downloaded from my Splunk Cloud instance.

The usage for these forwarders is limited to syslog messages only. I receive syslog messages from other devices on port 514 of the Universal Forwarders (UDP and TCP allowed) and those messages forward to Splunk Cloud. Pretty simple setup.

I have confirmed that traffic is being received on the servers on port 514 using tcpdump. However, none of that traffic is reaching Splunk Cloud. I can see the new forwarders in the Splunk Cloud Monitoring Console under Forwarders -> Versions and Forwarders -> Instance, but no data is being received from the new forwarders.

Below are my inputs.conf and outputs.conf files from one of the new servers. As you can see, it is a very simple setup and outputs.conf is doing nothing. Again, these were copied exactly from my old working servers, except for the hostname on the new forwarders.

----------------------------------------
inputs.conf

[default]
host = NHC-NETSplunkForwarder

[tcp://514]
acceptFrom = *
connection_host=ip
index=nhcnetwork
sourcetype=NETWORK
disabled=0

[udp://514]
acceptFrom = *
connection_host=ip
index=nhcnetwork
sourcetype=NETWORK

----------------------------------------
outputs.conf (sanitized)

#This breaks stuff. The credentials package provides what is needed here. Leave commented out.
#[tcpout]
#defaultGroup = splunkcloud,default-autolb-group
#[tcpout:default-autolb-group]
#server = XXXXXXX.splunkcloud.com:9997
#disabled = false
#[tcpout-server://XXXXXXX.splunkcloud.com:9997]

Do I need to do something in Splunk Cloud to allow these new forwarders to send data? I don't know how splunkclouduf.spl works, so I don't know a way to monitor output traffic from the Universal Forwarder (a couple of CLI checks are sketched below). Any suggestions or tips are appreciated.

Thanks,
-Pete
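One way to see what splunkclouduf.spl actually configured, and whether the UF has an active outbound connection, is from the UF's own CLI (a sketch, assuming a default Linux UF install under /opt/splunkforwarder):

# shows active vs. configured forward-servers (prompts for local Splunk credentials)
/opt/splunkforwarder/bin/splunk list forward-server

# shows the merged outputs.conf, including the stanzas contributed by splunkclouduf.spl
/opt/splunkforwarder/bin/splunk btool outputs list --debug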
Can you also confirm whether the data is coming from a UF? I saw you mentioned that the conf was on the indexers, but if the data is being sent from a Heavy Forwarder it will need to be there too. Is this a regular monitor:// input?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@splunklearner  Hmm, okay. It matches via https://regex101.com/r/ZZw8Lv/1, so it must be something else; I'll keep digging. Did ChatGPT have any other suggestions?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@livehybrid I already checked the same via ChatGPT and applied it, but no luck.
Hi @JohnGregg

I believe the 400 Bad Request is being caused by the specific params being sent. Are you able to print any logging to see what start/end time is being requested? If there is a case where start_time > now(), start_time = end_time, start_time > end_time, or some other invalid timing, it could cause this issue. When you're running in a loop like this, it can be easy to get into these situations without realising.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @splunklearner

Could this be an issue with the LINE_BREAKER? Try the following, which adds a lookahead for the date so the event only breaks in front of a new timestamp (shown in context in the stanza sketch below):

LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s)

Can I just check: you said you have the props/transforms on the indexer, but is this data sent from a UF or a HF? If it's a HF then you'll need to deploy them there too.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
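For reference, the suggested change would sit in the existing props.conf stanza on whichever instance first parses the data. This is a sketch only, reusing the [abcd] stanza name and the timestamp settings that appear in the original question further down this feed; the SEDCMD lines from that stanza would stay as they are:

[abcd]
SHOULD_LINEMERGE = False
LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s)
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000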
Jun 26 13:46:12 128.23.84.166 [local0.err] <131>Jun 26 13:46:12 GBSDFA1AD011HMA.systems.uk.fed ASM:f5_asm=PROD vs_name="/f5-tenant-01/XXXXXXXX" violations="HTTP protocol compliance failed" sub_violations="HTTP protocol compliance failed:Header name with no header value" attack_type="HTTP Parser Attack" violation_rating="3/5" severity="Error" support_id="XXXXXXXXX" policy_name="/Common/waf-fed-transparent" enforcement_action="none" dest_ip_port="128.155.6.2:443" ip_client="128.163.192.44" x_forwarded_for_header_value="N/A" method="POST" uri="/auth-service/api/v2/token/refreshAccessToken" microservice="N/A" query_string="N/A" response_code="500" sig_cves="N/A" sig_ids="N/A" sig_names={N/A} sig_set_names="N/A" staged_sig_cves="N/A" staged_sig_ids="N/A" staged_sig_names="N/A" staged_sig_set_names="N/A" <?xml version='1.0' encoding='UTF-8'?> <BAD_MSG> <violation_masks> <block>0-0-0-0</block> <alarm>2400500004500-106200000003e-0-0</alarm> <learn>0-0-0-0</learn> <staging>0-0-0-0</staging> </violation_masks> <request-violations> <violation> <viol_index>14</viol_index> <viol_name>VIOL_HTTP_PROTOCOL</viol_name> <http_sanity_checks_status>2</http_sanity_checks_status> <http_sub_violation_status>2</http_sub_violation_status> <http_sub_violation>SGVhZGVyICdBdXRob3JpemF0aW9uJyBoYXMgbm8gdmFsdWU=</http_sub_violation> </violation> </request-violations> </BAD_MSG>​ Jul 3 11:12:48 128.168.189.4 [local0.err] <131>2025-07-03T11:12:48+00:00 nginxplus-nginx-ingress-controller-6947cb4744-hxwf5 ASM:Log_details\x0a\x0avs_name="14-cyberwasp-sv-busybox.ikp3001ynp.cloud.uk.fed:10-/"\x0aviolations="Attack signature detected"\x0asub_violations="N/A"\x0aattack_type="Cross Site Scripting (XSS)"\x0aviolation_rating="5/5"\x0aseverity="N/A"\x0a\x0asupport_id="14096019979554169061"\x0apolicy_name="waf-fed-enforced"\x0aenforcement_action="block"\x0a\x0adest_ip_port="0.0.0.0:443"\x0aip_client="128.175.220.223"\x0ax_forwarded_for_header_value="N/A"\x0a\x0amethod="GET"\x0auri="/"\x0amicroservice="N/A"\x0aquery_string="svanga=%3Cscript%3Ealert(1)%3C/script%3E%22"\x0aresponse_code="0"\x0a\x0asig_cves="N/A,N/A,N/A,N/A"\x0asig_ids="200001475,200000098,200001088,200101609"\x0asig_names={XSS script tag end (Parameter) (2),XSS script tag (Parameter),alert() (Parameter)...}\x0asig_set_names="{High Accuracy Signatures;Cross Site Scripting Signatures;Generic Detection Signatures (High Accuracy)},{High Accuracy Signatures;Cross Site Scripting Signatures;Generic Detection Signatures (High Accuracy)},{Cross Site Scripting Signatures}..."\x0astaged_sig_cves="N/A,N/A,N/A,N/A"\x0astaged_sig_ids="N/A"\x0astaged_sig_names="N/A"\x0astaged_sig_set_names="N/A"\x0a\x0a<?xml version='1.0' 
encoding='UTF-8'?><BAD_MSG><violation_masks><block>400500200500-1a01030000000032-0-0</block><alarm>20400500200500-1ef903400000003e-7400000000000000-0</alarm><learn>0-0-0-0</learn><staging>0-0-0-0</staging></violation_masks><request-violations><violation><viol_index>42</viol_index><viol_name>VIOL_ATTACK_SIGNATURE</viol_name><context>parameter</context><parameter_data><value_error/><enforcement_level>global</enforcement_level><name>c3Zhbmdh</name><value>PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0PiI=</value><location>query</location><expected_location></expected_location><is_base64_decoded>false</is_base64_decoded><param_name_pattern>*</param_name_pattern><staging>0</staging></parameter_data><staging>0</staging><sig_data><sig_id>200001475</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>8</offset><length>7</length></kw_data></sig_data><sig_data><sig_id>200000098</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>7</offset><length>7</length></kw_data></sig_data><sig_data><sig_id>200001088</sig_id><blocking_mask>2</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>15</offset><length>6</length></kw_data></sig_data><sig_data><sig_id>200101609</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>7</offset><length>25</length></kw_data></sig_data></violation></request-violations></BAD_MSG>

We have already implemented some platform logs in Splunk, and that is the format we have for them (the 1st XML above). This is the props.conf we have written for it on the indexer:

[abcd]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

props.conf on the search head:

[abcd]
REPORT-xml_kv_extract = bad_msg_xml, bad_msg_xml_kv

transforms.conf:

[bad_msg_xml]
REGEX = (?ms)<BAD_MSG>(.*?)<\/BAD_MSG>
FORMAT = Bad_Msg_Xml::$1

[bad_msg_xml_kv]
SOURCE_KEY = Bad_Msg_Xml
REGEX = (?ms)<(\w*)>([^<]*)<\/\1>
FORMAT = $1::$2
MV_ADD = true

Now we are applying the same logic to the raw data attached above (the 2nd XML format), and it is not coming out in a readable format at all. Sometimes a single event is being split into multiple events; for example, the response code comes in as one event and the method as another, which is not supposed to happen. Please help me with the props and transforms modifications. We need the data to be in the format I gave initially.
Unfortunately, it is still not working, as I am working with a consistent list of multiple areas and descriptions. Are there other approaches that I might try? Thank you.
@_pravin

I don't think the SH unresponsiveness is caused by a configuration issue in your indexer discovery. With your classic approach (a static indexer list), the indexers themselves remained reachable and responsive to the SH for search and metadata operations. The SH normally becomes slow or unresponsive when it cannot communicate with the indexers for distributed search and queries.

You can set up alerts on the license master to notify you as you approach your daily license limit. If you must block data, do so at the forwarder level (disable outputs, or even disable the firewall port if possible, although that is not recommended). You could also consider using a null queue to drop data at the HF; a sketch follows below.

Regards,
Prewin
Splunk Enthusiast | Always happy to help!
If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
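For reference, a minimal sketch of the null-queue approach on the HF. The sourcetype name here (your_sourcetype) is a placeholder for whatever data you decide to drop:

props.conf on the HF:

[your_sourcetype]
TRANSFORMS-null = setnull

transforms.conf on the HF:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue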
Hi @PrewinThomas,

Thanks for your response. We used to have the classic way of connecting the forwarders to the peers, and we used the same port-flip technique, but we didn't have the SH slowness. This is making me doubt whether I am missing something in the configuration of indexer discovery.

Also, to get back to the same issue: I always checked whether the SH responds, but on the flip side, I never checked the indexers' UI. Should it be the case that all of the UIs fail due to cluster instability, and not just the SH? When this happened, I tried an SH restart, but it was of no use.

As for addressing license overages at the source, reducing ingestion or increasing the license is not possible at this stage because overages are a rare, one-off scenario, but Splunk's license enforcement is something I can work with. Is there a way I can cut off the data when I am approaching a license breach? (One monitoring approach is sketched below.)

One more important thing to note is how Splunk licensing works. Splunk logs license usage through the _internal index, and the meter is based on license.log; but if license.log is indexed late, with a time delay, the license usage is still attributed to the _time of the data, not the index time. Any thoughts on this process, @livehybrid @PrewinThomas?

Thanks,
Pravin
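On the "cut off before a breach" point, one common pattern is an alert on the license manager's usage log so you can act (for example, enable the null queue or disable forwarder outputs) before enforcement kicks in. A minimal sketch of such a search, run on or against the license manager; file and field names here are the standard ones but may differ if your setup is customized:

index=_internal source=*license_usage.log* type=Usage earliest=@d
| stats sum(b) AS bytes_today
| eval gb_today = round(bytes_today / 1024 / 1024 / 1024, 2)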
I have tried this method by customizing it inside the app and installing it using Install App, but I am getting a JavaScript error message.
Hi @livehybrid,

The SH sends its internal logs to the indexers as well, but we used the classic way of connecting the forwarders earlier and didn't have the SH slowness. This is making me doubt whether I am missing something in the configuration of indexer discovery.

To get back to your comment, when you say the SH starts queuing data, is there an intermediate queue on the SH, or on any forwarder, that stores data when it is unable to connect to the indexers (the data layer)?

Thanks,
Pravin
Hello,

Yes, case sensitivity was an issue. I could only get the command to run in the CLI if SC4S was in upper case, but whether it is upper or lower case in the script, it always exits 1 regardless of the running state.
@_pravin

If I understood you correctly, you flip the indexer receiving port whenever you have a license overage, and then your SH becomes unresponsive. The SH becomes unresponsive because it keeps trying to communicate with those indexers, waiting for timeouts and retries. This causes high network activity, resource exhaustion, and unresponsiveness. If the indexers are unreachable, the SH waits for each connection to time out, which can eventually block UI and search activity.

I would suggest not using port flipping as a workaround; it destabilizes the cluster and the SH. Instead, address license overages at the source (reduce ingestion, increase the license, or use Splunk's enforcement). A quick workaround is to restart the SH, which will clear the active connections, but the best approach is to address license issues at the source rather than blocking ingestion.

Regards,
Prewin
Splunk Enthusiast | Always happy to help!
If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hi @_pravin

This isn't really a suitable way to manage your excessive license usage. If your SHs are configured to send their internal logs to the indexers (which they should be), then they will start queueing data, and it sounds like this queueing could be slowing down your SHs. I think the best thing to do would be to focus on reducing your license ingest - are there any data sources which are unused or could be trimmed down?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi Splunkers,

I have a Splunk cluster with 1 SH, 1 CM, 1 HF, and 3 indexers. The CM is set up so that the forwarders and the SH connect using indexer discovery. All of this works well when we don't have any issues. However, when the indexers are not accepting any connections (sometimes, when we are overusing the license, we flip the input port on the indexers to xxxx so that no forwarder data is accepted), the network activity (read/write) on the Splunk Search Head takes a hit. The Search Head becomes completely unusable at this point.

Has anyone faced a similar issue, or am I missing any setting in the setup of indexer discovery?

Thanks,
Pravin
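For context, the forwarder-side half of indexer discovery usually looks roughly like the following. This is a sketch with placeholder names only (cm.example.com, cluster1, and the key are hypothetical, not taken from this environment), for forwarders and for an SH that forwards its internal logs:

outputs.conf:

[indexer_discovery:cluster1]
pass4SymmKey = <your_pass4SymmKey>
master_uri = https://cm.example.com:8089

[tcpout:cluster1_group]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = cluster1_group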
@Alan_Chan - Here is what I understand:
- You are using SOAR on port 8443.
- You are trying to connect SOAR from Splunk ES as per this - https://help.splunk.com/en/splunk-soar/soar-on-premises/administer-soar-on-premises/6.4.1/introduction-to-splunk-soar-on-premises/pair-splunk-soar-on-premises-with-splunk-enterprise-security-on-premises

If this is the case, and if:
- you are entering the IP & credentials correctly
- and there is no connectivity issue

then it is most likely an SSL certificate validation issue, and your error also suggests the same thing. (A quick way to inspect the certificate is sketched below.)

You can follow this document to fix it - https://help.splunk.com/en/splunk-soar/soar-on-premises/administer-soar-on-premises/6.4.1/manage-splunk-soar-on-premises-certificate-store/update-or-renew-ssl-certificates-for-nginx-rabbitmq-or-consul#f311597b_e8dd_40f2_9844_a62f90ffc64c__Updating_the_SSL_certificates
Please make sure you understand SSL certificates well before applying this in production, to avoid any issues.

I hope this helps!!! Kindly upvote if it does!!!
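To confirm that certificate validation is the problem before changing anything, one quick check is to look at the certificate SOAR presents on 8443. This is a sketch; soar.example.com is a placeholder for your actual SOAR host:

# inspect the certificate served on the SOAR web port (subject, issuer, validity dates)
openssl s_client -connect soar.example.com:8443 -servername soar.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates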