All Posts


@_pravin If I understood you correctly, you flip the indexer receiving port whenever you have a license overage, and your SH becomes unresponsive. The SH becomes unresponsive because it keeps trying to communicate with those indexers, waiting through timeouts and retries. This causes high network activity, resource exhaustion, and eventually an unresponsive SH: if the indexers are not accepting connections, the SH waits for each connection to time out, which can block UI and search activity.

I would suggest not using port flipping as a workaround; it destabilizes the cluster and the SH. Instead, address license overages at the source (reduce ingestion, increase your license, or use Splunk's enforcement). A quick workaround is to restart the SH, which will clear the active connections, but the best approach is to address license issues at the source rather than blocking ingestion.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
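For reference, one way to see which sourcetypes and indexes drive the usage is a search along these lines against the internal license usage log; a rough sketch, assuming it is run on (or against) the license manager's _internal index:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY st, idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB

The heaviest sourcetypes at the top are usually the best candidates for trimming or filtering before resorting to blocking ingestion.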
Hi @_pravin This isn't really a suitable way to manage your excessive license usage. If your SHs are configured to send their internal logs to the indexers (which they should be), then they will start queueing data, and it sounds like this queueing could be slowing down your SHs. I think the best thing to do would be to focus on reducing your license ingest - are there any data sources which are unused or could be trimmed down?

Did this answer help you? If so, please consider:
* Adding karma to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi Splunkers, I have a Splunk cluster with 1 SH, 1 CM, 1 HF, and 3 indexers. The CM is configured so that forwarders and the SH connect using indexer discovery. All of this works well when we don't have any issues. However, when the indexers are not accepting any connections (sometimes, when we are overusing the license, we flip the input port on the indexers to xxxx so they don't receive any data from forwarders), the network activity (read/write) on the Splunk Search Head takes a hit. The Search Head becomes completely unusable at this point. Has anyone faced a similar issue, or am I missing any setting during the setup of indexer discovery? Thanks, Pravin
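For context, a typical forwarder/SH-side indexer discovery setup looks roughly like this; a minimal sketch, where cm.example.com and the key are placeholders:

# outputs.conf on the forwarder / SH
[indexer_discovery:cluster_manager]
master_uri = https://cm.example.com:8089
# (manager_uri on newer Splunk versions)
pass4SymmKey = <your_discovery_key>

[tcpout:discovered_peers]
indexerDiscovery = cluster_manager

[tcpout]
defaultGroup = discovered_peers

With this setup the SH learns the peer list from the CM, so closing the receiving port on the indexers does not remove them from the list - the SH keeps retrying them, which matches the behaviour described above.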
@Alan_Chan - Here is what I got to understand:
* You are using SOAR on port 8443.
* You are trying to connect to SOAR from Splunk ES as per this - https://help.splunk.com/en/splunk-soar/soar-on-premises/administer-soar-on-premises/6.4.1/introduction-to-splunk-soar-on-premises/pair-splunk-soar-on-premises-with-splunk-enterprise-security-on-premises

If this is the case and if:
* you are entering the IP & credentials correctly
* and there is no connectivity issue

then it is most likely an SSL certificate validation issue, and your error also suggests the same thing.

You can follow this document to fix it - https://help.splunk.com/en/splunk-soar/soar-on-premises/administer-soar-on-premises/6.4.1/manage-splunk-soar-on-premises-certificate-store/update-or-renew-ssl-certificates-for-nginx-rabbitmq-or-consul#f311597b_e8dd_40f2_9844_a62f90ffc64c__Updating_the_SSL_certificates
* Please make sure you understand SSL certificates well before you apply this in production, to avoid any issues.

I hope this helps!!! Kindly upvote if it does!!!
@L_Petch Can you make sure your service name filter doesn't have a case sensitivity issue? Try:

#!/bin/bash

# Check if the container named 'sc4s' is running
if ! podman ps --filter "name=sc4s" --filter "status=running" | grep sc4s > /dev/null 2>&1; then
    exit 1
fi

# Optionally, check if the service inside the container is healthy (e.g., port 514)
if ! ss -ltn | grep ':514 ' > /dev/null 2>&1; then
    exit 1
fi

exit 0

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Hi @livehybrid

Apologies, I got sidetracked by another project and am just coming back to this. I am stuck and cannot get this check script to work. I have even tried a simpler if/else statement but don't seem to get the desired effect. If I change your script to echo rather than exit, I get the echo reply I am expecting whether the sc4s container state is started or stopped, but as soon as it is using exit statements it always exits 1. Below is what I have done, hoping you can spot my mistake.

----------sc4s_health_check_script.sh----------
Only change is making sc4s upper case

# Example keepalived health check script
#!/bin/bash

# Check if SC4S container is running
podman ps --filter "name=SC4S" --filter "status=running" | grep SC4S > /dev/null 2>&1 || exit 1

# Check if syslog port (e.g., 514) is listening
ss -ltn | grep ':514 ' > /dev/null 2>&1 || exit 1

exit 0

----------Keepalived.conf----------

vrrp_script sc4s_health_check {
    script "/etc/keepalived/sc4s_health_check_script.sh"
    interval 2
    weight -20
}

track_script {
    sc4s_health_check
}

This is what I get from keepalived, and no matter what I change it always returns 1:

Keepalived_vrrp[11414]: VRRP_Script(sc4s_health_check) succeeded
Keepalived_vrrp[11414]: Script `sc4s_health_check` now returning 1
Keepalived_vrrp[11414]: VRRP_Script(sc4s_health_check) failed (exited with status 1)
Keepalived_vrrp[11414]: (VI_1) Changing effective priority from 254 to 234
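One way to narrow this kind of failure down is to run the script by hand as the same user keepalived uses and check each test's exit code separately; a sketch, using the script path from above (the user keepalived runs check scripts as depends on your script_user setting, so running via sudo here is an assumption):

# Run the whole script and show its exit status
sudo /etc/keepalived/sc4s_health_check_script.sh; echo "script exit: $?"

# Test each check on its own. Note that podman containers are per-user:
# 'podman ps' as root will not list containers started by a rootless user
sudo podman ps --filter "name=SC4S" --filter "status=running"
ss -ltn | grep ':514 '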
Thank you very much for the information. Currently, we are forwarding logs using the Universal Forwarder with these settings:

inputs.conf:
start_from = oldest
checkpointInterval = 5
evt_resolve_ad_obj = 1
wec_event_format = raw_event

and these settings on the Splunk indexer:

props.conf:
SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = EventTime=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

We plan to test the Windows TA.
Probably the easiest thing would be to switch from "traditional" format to XML logging. Right now you're gathering a plain-text representation of Windows events. This format has some limitations (one of which you've just hit) and on average consumes more license than the XML-formatted events. So you might simply want to switch to renderXml=true and sourcetype=XmlWinEventLog in your event log inputs. Provided that you're using TA-windows, the rest should happen automagically. This will of course only affect events ingested after the change.
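As a rough illustration, the input stanza after the change would look something like this; a minimal sketch, assuming the Security channel and that TA-windows defaults handle parsing:

# inputs.conf (e.g., in the TA's local directory on the UF)
[WinEventLog://Security]
disabled = 0
renderXml = true
sourcetype = XmlWinEventLog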
Just writing "solved" is not very helpful to others. Please provide us with more details - what was the cause of the problem and how you fixed it - so that others can benefit from your experience in the future. Remember that Answers is about collaboration - sometimes others help you, sometimes you help them. Everybody wins.
thanks
@Alan_Chan Is your ES IP added to your SOAR allow list? Check the connection before pairing - confirm that Enterprise Security can initiate a TCP connection for REST calls to the SOAR port. Also, the error highlights an SSL handshake failure - are you using a self-signed or a valid CA certificate on the SOAR side? Note that Splunk Enterprise Security requires a valid SSL certificate to communicate with Splunk SOAR.
Ref: https://help.splunk.com/en/splunk-enterprise-security-8/administer/8.0/configuration-and-settings/pair-splunk-enterprise-security-with-splunk-soar

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
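To verify both reachability and the certificate SOAR presents, something like this run from the ES host can help; a sketch, with soar.example.com standing in for your SOAR address:

# Check TCP reachability and inspect the certificate chain presented on 8443
openssl s_client -connect soar.example.com:8443 -showcerts </dev/null

# Quick REST reachability check (-k skips verification; for connectivity testing only)
curl -vk https://soar.example.com:8443/

If openssl reports a self-signed or untrusted issuer, that lines up with the handshake failure in the error.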
solved.  
@uagraw01 How did you change the port? I mean, when I reconfigure /splunk/etc/system/local/server.conf with port = 8192 under the [kvstore] stanza, the KV store can't be enabled anymore (status = failed).
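For reference, the change being described would look like the following, with a restart and a status check afterwards; a sketch, assuming a standard $SPLUNK_HOME layout:

# server.conf
[kvstore]
port = 8192

# then restart and verify
splunk restart
splunk show kvstore-status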
I installed the UF 9.1.7 ARM package on Rocky 9 and got the error "tcp_conn_open_afux ossocket_connect failed with No such file or directory" when setting deploy-poll. Is this a compatibility problem?
Splunk ES is using port 8000 while SOAR is using port 8443.
I am using the Java SignalFlow client to send the same query each minute. Only the start and end times change. I actually set the start and end time to the same value, which seems to reliably give me a single data point, which is what I want. "persistent" is false and "immediate" is true. I'm reusing the SignalFlowClient object but closing the computation after reading the results.

If I run the client in a loop with a 60 second delay between iterations, I get frequent but unpredictable HTTP 400 Bad Request responses. It appears the first request always succeeds. There is no further info about what's bad. Output looks like this:

com.signalfx.signalflow.client.SignalFlowException: 400: failed post [ POST https://stream.us0.signalfx.com:443/v2/signalflow/execute?start=1750889822602&stop=1750889822602&persistent=false&immediate=true&timezone=America%2FChicago HTTP/1.1 ] reason: Bad Request
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportConnection.post(ServerSentEventsTransport.java:338)
    at com.signalfx.signalflow.client.ServerSentEventsTransport.execute(ServerSentEventsTransport.java:106)
    at com.signalfx.signalflow.client.Computation.execute(Computation.java:185)
    at com.signalfx.signalflow.client.Computation.<init>(Computation.java:67)
    at com.signalfx.signalflow.client.SignalFlowClient.execute(SignalFlowClient.java:145)

How can I troubleshoot this further? I can't find much useful info about how the client is supposed to work.

thanks
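One way to take the Java client out of the picture is to replay the same request with curl, using the exact URL from the exception; a sketch, where the SignalFlow program goes in the request body, YOUR_TOKEN and the program text are placeholders, and the X-SF-TOKEN header and text/plain content type are assumptions based on the usual SignalFx REST conventions:

curl -v 'https://stream.us0.signalfx.com/v2/signalflow/execute?start=1750889822602&stop=1750889822602&persistent=false&immediate=true' \
  -H 'X-SF-TOKEN: YOUR_TOKEN' \
  -H 'Content-Type: text/plain' \
  --data "data('cpu.utilization').publish()"

With -v you see the full response body, which often carries a more specific reason than the client's bare "Bad Request".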
The general idea is OK but there are details which can pop up unexpectedly here and there.

1. I assume (never used it myself) that Amazon Linux is also an RPM-based distro and you'll be installing Splunk the same way it was installed before.
2. Remember to shut down Splunk service before moving the data. And of course don't start the new instance before you copy the data.
3. I'm not sure why you want to snapshot the volumes. For backup in case you need to roll back?
4. You might have other dependencies lying around, not included in $SPLUNK_HOME - for example certificates.
5. If you move whole filesystems between server instances the UIDs and GIDs might not match and you might need to fix your accesses.

Oh, and most importantly - I didn't notice that at first - DON'T UPGRADE AND MOVE AT THE SAME TIME! Either upgrade and then do the move to the same version on a new server or move to the same 8.x you have now and then upgrade on the new server.
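In shell terms, points 2 and 5 boil down to something like this; a sketch, assuming $SPLUNK_HOME is /opt/splunk, a splunk service account exists on both hosts, and rsync is available:

# On the old server: stop Splunk before copying anything
/opt/splunk/bin/splunk stop

# Copy the whole installation, preserving permissions, symlinks and hardlinks
rsync -aH /opt/splunk/ newhost:/opt/splunk/

# On the new server: fix ownership in case UIDs/GIDs differ between hosts
chown -R splunk:splunk /opt/splunk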
Or...if you don't want to do a props you can use an eval statement inline:

| eval Source=mvindex('AccountName',0)
| eval Destination=mvindex('AccountName',1)

Replace Source/Destination with whatever you want the new name to be and AccountName with whatever the original names are. Also, if you have the proper TA (Splunk Add-on for Microsoft Windows | Splunkbase) installed in all the places it should be, these evaluations (via props) should already be happening as src_* and dest_* as per CIM normalization (Overview of the Splunk Common Information Model | Splunk Docs)
You should not modify the original raw event; just create a props to do search-time representational changes to the data. The only time you should modify the raw data is if you are masking it to protect sensitive information such as credit card or identification numbers, like Social Security numbers here in the US. Modifying raw log data before sending it to Splunk destroys forensic integrity, making it impossible to validate original events during investigations. It also breaks source consistency, impacting troubleshooting, compliance, and accurate analytics.
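As an illustration of the props route, search-time calculated fields can do the same renaming as the inline evals above without touching the raw event; a sketch, where the sourcetype and the src_user/dest_user field names are assumptions for this thread's Windows data:

# props.conf on the search head - search-time only, raw events stay untouched
[wineventlog]
EVAL-src_user = mvindex('AccountName', 0)
EVAL-dest_user = mvindex('AccountName', 1)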
LOL, here's ChatGPT - looks like Power Automate does have an HTTP POST action. IDK if it can read a full file, but it probably can.