All Posts

@splunklearner Hmm okay, it matches via https://regex101.com/r/ZZw8Lv/1 - it must be something else, I'll keep digging. Did ChatGPT have any other suggestions?
Did this answer help you? If so, please consider:
* Adding karma to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@livehybrid I already checked the same via ChatGPT and applied it, but no luck.
Hi @JohnGregg
I believe the 400 Bad Request is being caused by the specific params being sent. Are you able to print any logging to see what start/end time is being requested? If there is an issue where start_time > now(), start_time = end_time, start_time > end_time, or some other invalid timing, it could cause the error - if you're running in a loop like this, it can be easy to get into these problems without realising.
Did this answer help you? If so, please consider:
* Adding karma to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
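If it helps, here is a minimal sketch of the kind of pre-flight check I mean - the endpoint URL and parameter names are placeholders I've made up, so swap in whatever your script actually calls:

#!/bin/bash
# Hypothetical sketch: validate the requested time window (epoch seconds)
# before each API call, and log it so bad windows are easy to spot.
start_time=$1
end_time=$2
now=$(date +%s)

echo "requesting window start=$start_time end=$end_time now=$now" >> /tmp/api_window.log

# These conditions commonly produce a 400 Bad Request from time-bounded APIs.
if [ "$start_time" -ge "$end_time" ] || [ "$start_time" -gt "$now" ]; then
    echo "skipping invalid window start=$start_time end=$end_time" >> /tmp/api_window.log
    exit 1
fi

# Placeholder endpoint - substitute the real API call here.
curl -s "https://api.example.com/events?start=$start_time&end=$end_time"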
Hi @splunklearner
Could this be an issue with the LINE_BREAKER? Try the following, which includes a lookahead for the date:

LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s)

Can I just check: you said you have the props/transforms on the indexer, but is this data sent from a UF or HF? If it's a HF then you'll need to deploy them there too.
Did this answer help you? If so, please consider:
* Adding karma to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
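As a quick sanity check outside Splunk (a sketch assuming GNU grep with PCRE support), you can confirm the date pattern anchors each new event:

# Should print the line, confirming the pattern matches an event header.
printf 'Jun 26 13:46:12 128.23.84.166 [local0.err] test\n' | grep -P '^[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s'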
Jun 26 13:46:12 128.23.84.166 [local0.err] <131>Jun 26 13:46:12 GBSDFA1AD011HMA.systems.uk.fed ASM:f5_asm=PROD vs_name="/f5-tenant-01/XXXXXXXX" violations="HTTP protocol compliance failed" sub_violations="HTTP protocol compliance failed:Header name with no header value" attack_type="HTTP Parser Attack" violation_rating="3/5" severity="Error" support_id="XXXXXXXXX" policy_name="/Common/waf-fed-transparent" enforcement_action="none" dest_ip_port="128.155.6.2:443" ip_client="128.163.192.44" x_forwarded_for_header_value="N/A" method="POST" uri="/auth-service/api/v2/token/refreshAccessToken" microservice="N/A" query_string="N/A" response_code="500" sig_cves="N/A" sig_ids="N/A" sig_names={N/A} sig_set_names="N/A" staged_sig_cves="N/A" staged_sig_ids="N/A" staged_sig_names="N/A" staged_sig_set_names="N/A" <?xml version='1.0' encoding='UTF-8'?> <BAD_MSG> <violation_masks> <block>0-0-0-0</block> <alarm>2400500004500-106200000003e-0-0</alarm> <learn>0-0-0-0</learn> <staging>0-0-0-0</staging> </violation_masks> <request-violations> <violation> <viol_index>14</viol_index> <viol_name>VIOL_HTTP_PROTOCOL</viol_name> <http_sanity_checks_status>2</http_sanity_checks_status> <http_sub_violation_status>2</http_sub_violation_status> <http_sub_violation>SGVhZGVyICdBdXRob3JpemF0aW9uJyBoYXMgbm8gdmFsdWU=</http_sub_violation> </violation> </request-violations> </BAD_MSG>

Jul 3 11:12:48 128.168.189.4 [local0.err] <131>2025-07-03T11:12:48+00:00 nginxplus-nginx-ingress-controller-6947cb4744-hxwf5 ASM:Log_details\x0a\x0avs_name="14-cyberwasp-sv-busybox.ikp3001ynp.cloud.uk.fed:10-/"\x0aviolations="Attack signature detected"\x0asub_violations="N/A"\x0aattack_type="Cross Site Scripting (XSS)"\x0aviolation_rating="5/5"\x0aseverity="N/A"\x0a\x0asupport_id="14096019979554169061"\x0apolicy_name="waf-fed-enforced"\x0aenforcement_action="block"\x0a\x0adest_ip_port="0.0.0.0:443"\x0aip_client="128.175.220.223"\x0ax_forwarded_for_header_value="N/A"\x0a\x0amethod="GET"\x0auri="/"\x0amicroservice="N/A"\x0aquery_string="svanga=%3Cscript%3Ealert(1)%3C/script%3E%22"\x0aresponse_code="0"\x0a\x0asig_cves="N/A,N/A,N/A,N/A"\x0asig_ids="200001475,200000098,200001088,200101609"\x0asig_names={XSS script tag end (Parameter) (2),XSS script tag (Parameter),alert() (Parameter)...}\x0asig_set_names="{High Accuracy Signatures;Cross Site Scripting Signatures;Generic Detection Signatures (High Accuracy)},{High Accuracy Signatures;Cross Site Scripting Signatures;Generic Detection Signatures (High Accuracy)},{Cross Site Scripting Signatures}..."\x0astaged_sig_cves="N/A,N/A,N/A,N/A"\x0astaged_sig_ids="N/A"\x0astaged_sig_names="N/A"\x0astaged_sig_set_names="N/A"\x0a\x0a<?xml version='1.0'
encoding='UTF-8'?><BAD_MSG><violation_masks><block>400500200500-1a01030000000032-0-0</block><alarm>20400500200500-1ef903400000003e-7400000000000000-0</alarm><learn>0-0-0-0</learn><staging>0-0-0-0</staging></violation_masks><request-violations><violation><viol_index>42</viol_index><viol_name>VIOL_ATTACK_SIGNATURE</viol_name><context>parameter</context><parameter_data><value_error/><enforcement_level>global</enforcement_level><name>c3Zhbmdh</name><value>PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0PiI=</value><location>query</location><expected_location></expected_location><is_base64_decoded>false</is_base64_decoded><param_name_pattern>*</param_name_pattern><staging>0</staging></parameter_data><staging>0</staging><sig_data><sig_id>200001475</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>8</offset><length>7</length></kw_data></sig_data><sig_data><sig_id>200000098</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>7</offset><length>7</length></kw_data></sig_data><sig_data><sig_id>200001088</sig_id><blocking_mask>2</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>15</offset><length>6</length></kw_data></sig_data><sig_data><sig_id>200101609</sig_id><blocking_mask>3</blocking_mask><kw_data><buffer>c3ZhbmdhPTxzY3JpcHQ+YWxlcnQoMSk8L3NjcmlwdD4i</buffer><offset>7</offset><length>25</length></kw_data></sig_data></violation></request-violations></BAD_MSG>

We have already implemented some platform logs in Splunk, and this is the format we have for them (the 1st XML). This is the props.conf we have written for it on the indexer:

[abcd]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false

props.conf on the search head:

[abcd]
REPORT-xml_kv_extract = bad_msg_xml, bad_msg_xml_kv

transforms.conf:

[bad_msg_xml]
REGEX = (?ms)<BAD_MSG>(.*?)<\/BAD_MSG>
FORMAT = Bad_Msg_Xml::$1

[bad_msg_xml_kv]
SOURCE_KEY = Bad_Msg_Xml
REGEX = (?ms)<(\w*)>([^<]*)<\/\1>
FORMAT = $1::$2
MV_ADD = true

Now we are applying the same logic to the raw data attached above in the 2nd XML format, and it is not coming out in a readable format at all. Sometimes a single event is arriving as multiple events - for example, the response code comes in as one event and the method as another, which is not supposed to happen. Please help me with props and transforms modifications; we need the data in the format I gave initially.
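One thing that stands out in the 2nd sample (a hypothesis, not a confirmed fix): the fields are separated by literal \x0a escape sequences rather than real newlines, so the existing SEDCMDs and extractions never see the structure they expect. If those \x0a sequences really are literal characters in the raw event, a substitution analogous to the existing SEDCMD-newline_remove (something along the lines of SEDCMD-x0a = s/\\x0a/\n/g) may be worth testing. The shell equivalent shows the idea:

# Sketch: turn literal backslash-x0a sequences into real newlines (GNU sed).
printf '%s\n' 'violations="Attack signature detected"\x0amethod="GET"' | sed 's/\\x0a/\n/g'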
Unfortunately, it is still not working, as I am working with a consistent list of multiple areas and descriptions. Are there other approaches I might try? Thank you
@_pravin I don't think the SH unresponsiveness is caused by a config issue in your Indexer Discovery. With your classic approach (static indexer list), the indexers themselves remained reachable and responsive to the SH for search and metadata operations. The SH normally becomes slow or unresponsive when it cannot communicate with the indexers for distributed search/query. You can set up alerts on the license master to notify you as you approach your daily license limit. If you must block data, do so at the forwarder level (disable outputs, or even disable the FW port if possible - not recommended). You can also consider using a null queue to drop data at the HF. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
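For the alerting side, the license master also exposes pool usage over REST - a rough sketch (host and credentials are placeholders; the pools endpoint reports used_bytes against the effective quota):

# Query current license pool usage from the license master's management port.
curl -sk -u admin:changeme "https://license-master.example.com:8089/services/licenser/pools?output_mode=json"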
Hi @PrewinThomas , Thanks for your response. We used to have the classic way of connecting the forwarder to peers with the same port-flip technique, but we didn't have the SH slowness. This is making me doubt whether I am missing something in the configuration of indexer discovery. Also, to get back to the same issue: I always checked whether the SH responds, but on the flip side I never checked the indexers' UI. Should it be the case that all the UIs fail due to cluster instability, not just the SH? When this happened, I tried an SH restart, but to no avail. On addressing license overages at the source: reducing ingestion or increasing the license is not possible at this stage because overages are a rare one-off scenario, but Splunk's license enforcement is something I can use. Is there a way I can cut off the data when I am approaching a license breach? One more important thing to note is how Splunk license metering works: Splunk logs license usage through the _internal index, and the meter is populated from license.log, but if license.log is indexed late, the usage is still attributed to the _time of the data, not the index time. Any thoughts on this process @livehybrid @PrewinThomas  Thanks, Pravin
I have tried this method by customizing it inside the app and installing it using Install App, but I am getting a JavaScript error message.
Hi @livehybrid , The SH sends its internal logs to the indexers as well, but we used to have the classic way of connecting the forwarder earlier and didn't have the SH slowness. This is making me doubt whether I am missing something in the configuration of indexer discovery. To get back to your comment: when you say the SH starts queuing data, is there an intermediate queue on the SH, or on any forwarder, to store data when it's unable to connect to the indexers (data layer)? Thanks, Pravin
Hello, Yes, case sensitivity was an issue. I could only get the command to run in the CLI if SC4S was in upper case, but whether it is upper or lower case in the script, it always exits 1 regardless of the running state.
@_pravin If I understood you correctly, you flip the indexer receiving port whenever you have a license overage, and your SH becomes unresponsive. The SH becomes unresponsive because it keeps trying to communicate with those indexers, waiting through timeouts and retries. This causes high network activity, resource exhaustion, or unresponsiveness. If indexers are unreachable, the SH waits for each connection to time out, which can eventually block UI and search activity. I would suggest not using port flipping as a workaround; it destabilizes the cluster and the SH. Instead, address license overages at the source (reduce ingestion, increase the license, or use Splunk's enforcement). A quick workaround is to restart the SH, which will clear the active connections, but the best approach is to address license issues at the source rather than blocking ingestion. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hi @_pravin
This isn't really a suitable way to manage your excessive license usage. If your SHs are configured to send their internal logs to the indexers (which they should be), they will start queueing data, and it sounds like this queueing could be slowing down your SHs. I think the best thing to do would be to focus on reducing your license ingest - are there any data sources which are unused or could be trimmed down?
Did this answer help you? If so, please consider:
* Adding karma to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
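If it helps, a common starting point for finding the heaviest sources is the license usage log on the license master - a sketch, using the standard b (bytes) and st (sourcetype) fields from license_usage.log; adjust the split-by to idx or h as needed:

index=_internal source=*license_usage.log* type=Usage earliest=-30d
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024,2)
| sort - GB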
Hi Splunkers, I have a Splunk cluster with 1 SH, 1 CM and HF, and 3 indexers. The CM is configured to connect forwarders and the SH using indexer discovery. All of this works well when we don't have any issues. Still, when the indexers are not accepting connections (sometimes, when we are overusing the license, we flip the input port on the indexers to xxxx so they don't receive any data from forwarders), the network activity (read/write) on the Splunk Search Head takes a hit. The Search Head becomes completely unusable at this point. Has anyone faced a similar issue, or am I missing any setting during the setup of indexer discovery? Thanks, Pravin
@Alan_Chan - Here is what I got to understand:
* You are using SOAR on port 8443.
* You are trying to connect SOAR from Splunk ES as per this - https://help.splunk.com/en/splunk-soar/soar-on-premises/administer-soar-on-premises/6.4.1/introduction-to-splunk-soar-on-premises/pair-splunk-soar-on-premises-with-splunk-enterprise-security-on-premises
If this is the case and if:
* you are entering the IP & credentials correctly
* and there is no connectivity issue,
then it is most likely an SSL certificate validation issue, and your error also suggests the same thing.
You can follow this document to fix it - https://help.splunk.com/en/splunk-soar/soar-on-premises/administer-soar-on-premises/6.4.1/manage-splunk-soar-on-premises-certificate-store/update-or-renew-ssl-certificates-for-nginx-rabbitmq-or-consul#f311597b_e8dd_40f2_9844_a62f90ffc64c__Updating_the_SSL_certificates
* Please make sure you understand SSL certificates well before you apply this on Production to avoid any issues.
I hope this helps!!! Kindly upvote if it does!!!
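If you want to see which certificate SOAR is actually presenting on 8443 before changing anything (a quick diagnostic sketch; replace the hostname with your SOAR address):

# Print the subject, issuer and validity dates of the certificate served on 8443.
openssl s_client -connect soar.example.com:8443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates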
@L_Petch Can you make sure your service name filter does not have a case sensitivity issue? Try,

#!/bin/bash
# Check if the container named 'sc4s' is running
if ! podman ps --filter "name=sc4s" --filter "status=running" | grep sc4s > /dev/null 2>&1; then
    exit 1
fi

# Optionally, check if the service inside the container is healthy (e.g., port 514)
if ! ss -ltn | grep ':514 ' > /dev/null 2>&1; then
    exit 1
fi

exit 0

Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
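To verify the script on its own before wiring it into keepalived, you can run it manually and check the exit status (assuming the path from the keepalived.conf in this thread):

# 0 means healthy, 1 means one of the checks failed.
bash /etc/keepalived/sc4s_health_check_script.sh; echo "exit status: $?"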
Hi @livehybrid
Apologies, I got sidetracked by another project and am just coming back to this. I am stuck and cannot get this check script to work. I have even tried a simpler "if else" statement but don't seem to get the desired effect. If I change your script to echo rather than exit, I get the echo reply I am expecting whether the sc4s container state is started or stopped, but as soon as it uses exit statements it always exits 1. Below is what I have done, hoping you can spot my mistake.

----------sc4s_health_check_script.sh----------
Only change is making sc4s upper case

# Example keepalived health check script
#!/bin/bash
# Check if SC4S container is running
podman ps --filter "name=SC4S" --filter "status=running" | grep SC4S > /dev/null 2>&1 || exit 1
# Check if syslog port (e.g., 514) is listening
ss -ltn | grep ':514 ' > /dev/null 2>&1 || exit 1
exit 0

----------keepalived.conf----------

vrrp_script sc4s_health_check {
    script "/etc/keepalived/sc4s_health_check_script.sh"
    interval 2
    weight -20
}
track_script {
    sc4s_health_check
}

This is what I get from keepalived, and no matter what I change it always returns 1:

Keepalived_vrrp[11414]: VRRP_Script(sc4s_health_check) succeeded
Keepalived_vrrp[11414]: Script `sc4s_health_check` now returning 1
Keepalived_vrrp[11414]: VRRP_Script(sc4s_health_check) failed (exited with status 1)
keepalived_vrrp[11414]: (VI_1) Changing effective priority from 254 to 234
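One thing worth ruling out (a debugging sketch, not a definitive fix): depending on configuration, keepalived may run vrrp scripts as an unprivileged user such as keepalived_script, and containers started by root are not visible to another user's "podman ps". Temporarily logging what the script actually sees when keepalived runs it can confirm this:

#!/bin/bash
# Temporary diagnostic version: record who runs the script and what podman sees.
echo "run at $(date) as $(id -un)" >> /tmp/sc4s_check_debug.log
podman ps --filter "name=SC4S" --filter "status=running" >> /tmp/sc4s_check_debug.log 2>&1
ss -ltn >> /tmp/sc4s_check_debug.log 2>&1
exit 0

If the log shows an empty container list or a permissions error under keepalived but not from your own shell, the execution context is the culprit rather than the script logic.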
Thank you very much for the information. Currently, we are forwarding logs using the Universal Forwarder with these settings:

inputs.conf:

start_from = oldest
checkpointInterval = 5
evt_resolve_ad_obj = 1
wec_event_format = raw_event

and these settings on the Splunk Indexer:

props.conf:

SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = EventTime=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

We plan to test the Windows TA.
Probably the easiest thing would be to switch from the "traditional" format to XML logging. Currently you're gathering a plain-text representation of Windows events. This format has some limitations (one of which you've just hit) and on average consumes more license than XML-formatted events. So you might simply want to switch to renderXml=true and sourcetype=XmlWinEventLog in your event log inputs. Provided that you're using TA-windows, the rest should happen automagically. This will of course only affect events ingested after the change.
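For illustration, the change would look something like this in the relevant inputs.conf stanza (the Security channel here is just an example; apply it to whichever event log inputs you use):

[WinEventLog://Security]
renderXml = true
sourcetype = XmlWinEventLog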
Just writing "solved" is not very helpful to others. Please provide us with more details - what was the cause of the problem and how you fixed it, so that others can benefit from your experience in the future. Remember that Answers is about collaboration - sometimes others help you, sometimes you help them. Everybody wins.