All Posts


Great, thank you for the quick answer and the information! Lucas
Hi @tafadzwad

Please could you run btool to check the configuration:

$SPLUNK_HOME/bin/splunk cmd btool inputs list --debug snow://change_request

I assume the table is change_request, but it is worth checking!

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi,

We're currently facing a load imbalance issue in our Splunk deployment and would appreciate any advice or best practices.

Current Setup: Universal Forwarders (UFs) → Heavy Forwarders (HFs) → Cribl

We originally had 8 HFs handling parsing and forwarding. Recently, we added 6 new HFs (for a total of 14) to distribute the load more evenly and to offload the congested older HFs. All HFs are included in the UFs' outputs.conf under the same TCP output group.

Issue: Some of the original 8 HFs are still showing blocked=true in metrics.log (splunktcpin queue full), while the newly added HFs receive little to no traffic. It looks like the load is not being evenly distributed across the available HFs.

Here is our current outputs.conf deployed on the UFs:

[tcpout]
defaultGroup = HF_Group
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:HF_Group]
server = HF1:9997,HF2:9997,...HF14:9997

We have not set autoLBFrequency yet.

Questions:
1. Do we need to set autoLBFrequency in order to achieve true active load balancing across all 14 HFs, even when none of them are failing?
2. If we set autoLBFrequency = 30, are there any potential downsides (e.g., performance impact, TCP session churn)?
3. Are there better or recommended approaches to ensure even distribution of UF traffic across multiple HFs before forwarding to Cribl?

Please note that we are sending a large volume of data, primarily Windows event logs. Your help is very much appreciated. Thank you
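For reference, a minimal sketch of what such an outputs.conf could look like with time-based load balancing tuned. autoLBFrequency and autoLBVolume are standard outputs.conf settings, but the values below are illustrative, not recommendations:

```ini
[tcpout]
defaultGroup = HF_Group
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:HF_Group]
server = HF1:9997,HF2:9997,...HF14:9997
# Rotate to a different HF every 30 seconds, even while the current one is healthy
autoLBFrequency = 30
# Optionally also rotate once ~1 GB has been sent, whichever comes first
autoLBVolume = 1073741824
```

If large single event streams keep connections sticky, forceTimebasedAutoLB is another outputs.conf setting worth reading up on, with the caveat that it can switch mid-event for unparsed data.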
Hello,

I have a problem with the Analyst Queue: I am not able to add a column to the Analyst Queue in the GUI. When I do this (using the cogwheel icon on the Analyst Queue dashboard), the column is added, but when I log off and log on again, the previously added column disappears and the Analyst Queue is back to its default settings. I expected the new Analyst Queue config to be saved in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf, but I found that this file remains untouched when I add a new column. Is there any way to add a new column to the AQ permanently? The only way I am aware of is to manually edit $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf (this works), but that is not usable for analysts who would like to customize the AQ GUI and do not have the privilege to edit config files. The environment is a Search Head cluster (3 nodes) with Splunk Enterprise 9.3.0 and Enterprise Security 8.1.0. Any hint would be highly appreciated.

Best regards
Lukas Mecir
Hi Splunk Community,

I am using the Splunk Add-on for ServiceNow v8.0.0 in a Splunk Cloud environment and have correctly configured a custom input for the change_request table with:

timefield = sys_updated_on
filter_parameters = sysparm_query=ORDERBYDESCsys_updated_on&sysparm_limit=100

Despite this, logs in _internal consistently show the add-on attempting to use:

change_request.change_request.sys_updated_on

Example from splunkd_access.log:

200 GET /change_request.change_request.sys_updated_on

This causes data not to be ingested, despite 200 HTTP responses. The correct behavior should use change_request as the table and include sys_updated_on in the query string, not in the URL path.

Request: Please confirm whether this is a known issue and whether there is a workaround. Thank you.
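For comparison, a sketch of what the input stanza would be expected to look like in inputs.conf. The stanza name is based on the snow:// scheme this add-on uses; the duration value is a placeholder, and the exact attributes on your system should be verified with btool:

```ini
[snow://change_request]
# Polling interval in seconds (illustrative value)
duration = 60
timefield = sys_updated_on
filter_parameters = sysparm_query=ORDERBYDESCsys_updated_on&sysparm_limit=100
```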
Hi @CAED-form

This is actually a bug in a number of versions from what I can see. It isn't listed in the known issues, so I'm unsure if there is already an SPL bug reference raised for it. I would recommend contacting support: ask them whether this is already raised as a bug, and have them raise it if not. They should give you a reference starting with "SPL-" which can be used to track it against the "fixed issues" lists of future releases.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @lux209

It is not required to use mTLS if you do not want to (I don't usually have mTLS on client->DS comms, as plain SSL is enough for me). In server.conf, set requireClientCert to false under the [sslConfig] stanza (which I think is the default). The reason you might want to require a client cert on a DS is to ensure that no other hosts can connect to your DS and receive its configuration, which may contain sensitive credentials/configuration (e.g. certs to send to your indexers). If this is low risk for you (e.g. the DS is not publicly accessible), then it sounds like leaving requireClientCert set to false would suffice.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
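A minimal server.conf sketch on the deployment server for encrypted connections without mTLS. This assumes certificates are already in place; the file paths are placeholders, not real locations:

```ini
[sslConfig]
# Encrypt splunkd traffic but do not demand a certificate from clients (no mTLS)
requireClientCert = false
serverCert = $SPLUNK_HOME/etc/auth/mycerts/ds-server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
```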
Since when is the SAL a CSV file? It is a perverted UTF-16 fixed-record monstrosity. Please read my old post on splunking the SAP log, which the OP referenced, to understand what is going on.
Hi Splaur, methinks your EXTRACT-fields is not needed; that action is performed in transforms.conf via REPORT-SAP-Delim, which refers to the line separators generated via add_separators. Please reread the example and stick to it, including all the names, until it works. That should get you going.
Hello,

I'm looking to secure the connection to our deployment server using HTTPS, following this doc: https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/Securingyourdeploymentserverandclients

I'm wondering if a client certificate is mandatory, or if it would be possible to only install a certificate on the DS server itself? I don't need mTLS; my goal is only to have an encrypted connection between the server and the clients.

Thanks for your help
Lucas
Hello (excuse my English, it's not my native language)

I upgraded Splunk Enterprise from 9.2.1 to 9.3.4. Everything went right; each server was upgraded without any problem (Linux and Windows). But when I click on Settings > Users and authentication > Roles (same for Users), the panel with the list of users is empty. I can see only the top menu of Splunk; the rest of the page is blank. This occurs when the language is set to French (fr-FR); in English (en-US) it's OK. Can somebody help me? Where can I find the language pack, to see if something is missing? Thank you
Hi @jw220635

There were a number of bugs fixed in 9.4.2 relating to the Deployment Server UI not loading correctly (https://help.splunk.com/en/splunk-enterprise/release-notes-and-updates/release-notes/9.4/fixed-issues/fixed-issues/splunk-enterprise-9.4.2-fixed-issues#:~:text=Distributed%20deployment%2C%20forwarder%2C%20deployment%20server%20issues), so I would recommend upgrading to the latest 9.4.x version to see if this resolves your issue.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @tbarn005

Can you confirm which pass4SymmKey you have verified is the same across the SH and HFs? The pass4SymmKey under the [deployment] stanza in server.conf must match between the deployment server and the heavy forwarder; it is used for the Ingest Actions preview, and I believe it cannot be a default value. For more info and diagnostics/troubleshooting, check out https://splunk.my.site.com/customer/s/article/Ingest-Actions-are-not-working

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
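A sketch of the relevant server.conf stanza, set identically on both instances. The secret value is a placeholder:

```ini
[deployment]
# Must be identical on the deployment server and the heavy forwarder;
# Splunk encrypts this value in place after a restart
pass4SymmKey = <shared-secret>
```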
Chrome on iPad has the same issues, tested yesterday. My teammate just tried using it via VPN from his home office, and he couldn't recreate the issue! So I'm starting to believe it's the company Wi-Fi's fault. I will check with our network guys.
Don't touch the app's default directory! It's not supposed to be edited and will get overwritten by the next update.
1. The Add-on for *nix does contain some questionable items. They are good for demonstrating functionality but not necessarily for production use. I definitely wouldn't just bulk ingest everything under /var/log with one sourcetype. You might just disable the "global" /var/log monitor stanza and ingest each needed file/dir with its own sourcetype. It's easier than pulling all files at once and overriding the sourcetype later (like @PrewinThomas showed). But you can do it by overriding settings on the input.

2. In the case of sourcetype assignment, you can override the sourcetype set at input level by defining a props.conf source:: stanza with a sourcetype assignment. If you create an entry in inputs.conf

[monitor:///var/log]
sourcetype = s1

but add to props.conf

[source::.../var/log/apache/access.log]
sourcetype = s2

then all files from /var/log will be ingested with a sourcetype of s1, except for access.log, which will have a sourcetype of s2.

3. "Overriding" in the quote you posted means that if there are multiple instances of "the same" config item coming from different stanzas or config files, one of them takes precedence over another and only one of them is used. For example, if you have a general sourcetype setting

[s2]
LINE_BREAKER = (\r\n)+

and a source-specific one

[source::.../var/log/apache/access.log]
LINE_BREAKER = (##)

then for any s2-sourcetyped file the line breaker will be assigned according to the general sourcetype rule, except for access.log, which will be line-broken according to the source-specific setting, which takes precedence.
@haph

Can you try an alternative browser (Chrome/Firefox) on your iPad and see how it works? This helps isolate whether Safari on iPad is the bottleneck.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@BradOH

Can you place/append your commands.conf file in the app's default directory, not local, and not in system/local? Then restart and check whether it takes effect.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hello @Alan_Chan

Follow these quick steps:
DNS Resolution: Run nslookup <eventhub-namespace>.servicebus.windows.net on the Heavy Forwarder/Search Head to verify DNS resolution.
Connectivity Check: Use ping <eventhub-namespace>.servicebus.windows.net to confirm network access.
Firewall Rules: Ensure the public IP of the instance isn't blocked by Azure Event Hub's IP firewall settings.
Permission Check: Verify the Event Hub permissions for the data inputs in question.
Endpoint Test: Run wget https://<eventhub-namespace>.servicebus.windows.net/ to check endpoint availability.

If the namespace can't be resolved, the Event Hub may have been decommissioned.
@Na_Kang_Lim

The above is what we highlighted about props and transforms. As for inputs.conf, Splunk generally matches the most specific [monitor://] stanza for a file. If both stanzas exist, the more specific path (/var/log/abc/def.log) overrides the more general one (/var/log). So, answering your question: def.log will use sourcetype = alphabet_log, even if [monitor:///var/log] has a different sourcetype or settings.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
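As a sketch, the two stanzas side by side. The path and alphabet_log come from this thread; generic_var_log is a placeholder name for the broader stanza's sourcetype:

```ini
[monitor:///var/log]
sourcetype = generic_var_log

[monitor:///var/log/abc/def.log]
# def.log matches both stanzas; the more specific path wins,
# so its events get sourcetype=alphabet_log
sourcetype = alphabet_log
```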