All Posts
The app was uploaded successfully, but when I try to use JavaScript in Splunk Cloud I get an error. Is there any blocking at the Splunk end for security reasons? The same JavaScript works in Splunk Enterprise.
In Simple XML dashboards you can control what is displayed: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#dashboard_or_form See the hide* attributes there. In practice, though, a user who has rights to edit the dashboard can simply append /edit or /editxml to the dashboard URL, so hiding these controls works only at the UI layer.
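For illustration, a minimal Simple XML sketch of those attributes (the label and panel content are placeholders; hideExport and hideEdit come from the reference linked above, which lists the other hide* attributes too):

<dashboard hideExport="true" hideEdit="true">
  <label>My Dashboard</label>
  <!-- hideExport suppresses the export control; hideEdit suppresses the edit controls -->
  <row>
    ...
  </row>
</dashboard>

Per the caveat above, this only hides the controls in the UI; it does not restrict the underlying capability.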
I just built an application that contains a dashboard, and I don't want the export and duplicate buttons at the top of the dashboard. I tried removing the export_results_is_visible capability, but the export button is still visible and usable on the application dashboard. Is there any other way to disable them?
You mean you uploaded the app OK but your JS is throwing errors? Or did the AppInspect process fail, so you could not upload the app?
I don’t believe you will have any issues using those 8.x.x UFs with Splunk 9.3.x or even 9.4.x. They will work together; maybe some modifications are needed, but probably none. Here is one old post which points to other posts relevant to your environment: https://community.splunk.com/t5/Deployment-Architecture/Splunk-Migration-from-existing-server-to-a-new-server/m-p/681655/highlight/true#M28001 If/when you can set up a new host to use for testing, this shouldn’t be an issue. Just test it on test systems with the instructions from the posts above. When you have checked and approved those tests, do the real migration. I’m not 100% sure that there are no issues with the Amazon Linux 2023 version; I have a feeling there could be something that needs separate configuration, e.g. cgroups or something else. You will probably find more details at https://splunkcommunity.slack.com/archives/C03M9ENE6AD
Can you explain more about how and when you are doing this “flip”? You probably know that once you have overused your license, it doesn’t matter how much more you flip? When you “flip” your indexer receiving port and the nodes use indexer discovery, the cluster manager updates this information in its list. Then, when someone asks, it hands out the new ports, and the UFs update their targets accordingly. If/when you have a firewall between your sources and the indexers, it will block the connections and they can no longer send events to the indexers. But if you have UFs configured with a static host+port combination, they will keep trying to send to the old targets. If your SHs and other infrastructure nodes use indexer discovery, they will start using the new ports; of course, if there are no firewall openings between those nodes and the indexers, traffic stops, and when the queues fill up, other issues will probably arise. You should check that the “flipped” ports are open between the SHCs and the indexers; then your environment should work as expected. Whether this is the best way to avoid license overuse is another story!
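To make the distinction concrete, here is a minimal outputs.conf sketch of the two forwarder setups described above (the host names, group names, and key are hypothetical placeholders):

# Indexer discovery: the forwarder asks the cluster manager for the current
# indexer host:port list, so a "flipped" receiving port is picked up automatically.
[indexer_discovery:cm1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <key>

[tcpout:discovered_indexers]
indexerDiscovery = cm1

# Static targets: the forwarder keeps retrying exactly these host:port pairs,
# even after the receiving port has changed.
[tcpout:static_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout]
defaultGroup = discovered_indexers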
4. That was the most obvious example. There might be some other dependencies - for example, if you're using DB Connect, you need a JRE.
5. Yes, chowning should take care of it. But as I understood from your earlier comments, you have your index volume(s) outside /opt/splunk. You need to take care of their ownership as well, as in the sketch below.
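For instance, a quick shell sketch, assuming the index volume is mounted at the hypothetical path /data/splunk-indexes:

# Fix ownership of the Splunk installation and the external index volume
# after copying them to the new server (run via sudo or as root):
sudo chown -R splunk:splunk /opt/splunk
sudo chown -R splunk:splunk /data/splunk-indexes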
Hi, depending on specific field values I would like to perform different actions per event in one search, using the map command. I will try to create a simple example:
1. If there is an event that includes field=value_1, I would like to remove rows from a lookup that have field=value_1
2. If there is an event that includes field=value_2, I would like to add a row to another lookup.
Here is how I create my sample data:

| makeresults format=csv data="field
value_1
value_2"
| eval spl=case(
    field="value_1", "| inputlookup test.csv | search NOT field=\"" + field + "\" | outputlookup test_2.csv",
    field="value_2", "| makeresults | eval field=\"" + field + "\" | outputlookup test_2.csv")

The easiest way I thought of was adding

| map search="$spl$"

but Splunk seems to put quotes around the value. Avoiding that with the approach described here (https://community.splunk.com/t5/Installation/How-do-you-interpret-string-variable-as-SPL-in-Map-function/m-p/385353) does not work, because I cannot use the search command this way. Do you have any ideas how to achieve my goal?
Hi @PickleRick, Thank you so much for your help. Please find my comments inline:
1. I assume (never used it myself) that Amazon Linux is also an RPM-based distro and you'll be installing Splunk the same way it was installed before. -- Yes, Amazon Linux natively supports the RPM package installer.
2. Remember to shut down the Splunk service before moving the data. And of course don't start the new instance before you copy the data. -- Got it.
3. I'm not sure why you want to snapshot the volumes. For backup in case you need to roll back? -- Yes, correct; in case there is a need to roll back.
4. You might have other dependencies lying around, not included in $SPLUNK_HOME - for example certificates. -- In our case, the SSL certificates are deployed under /opt/splunk/etc/certs/, as the SSL offloading is directly on the server and there is no load balancer or proxy in front. Can you think of anything else that may be deployed outside of /opt/splunk?
5. If you move whole filesystems between server instances, the UIDs and GIDs might not match and you might need to fix your accesses. -- Can we recursively chown the files on the new server after migration to ensure correct ownership? I hope that should take care of it: sudo chown -R splunk:splunk /opt/splunk
Oh, and most importantly - I didn't notice that at first - DON'T UPGRADE AND MOVE AT THE SAME TIME! Either upgrade and then move to the same version on a new server, or move to the same 8.x you have now and then upgrade on the new server. -- Sure, I prefer the latter, but the older Splunk Enterprise version 8.2.2.1 does not support Amazon Linux.
Talked to my sysadmin; we decided to use port 1035 instead of port 514. I'm not getting the socket errors in splunkd.log anymore, but I'm still not seeing the messages from the UF in Splunk Cloud.

root@NHC-NETSplunkForwarder:/opt/splunkforwarder/var/log/splunk# cat splunkd.log | grep "1035"
06-26-2025 20:05:00.017 +0000 INFO TcpInputConfig [1851 TcpListener] - IPv4 port 1035 is reserved for raw input
06-26-2025 20:05:00.017 +0000 INFO TcpInputConfig [1851 TcpListener] - IPv4 port 1035 will negotiate s2s protocol level 7
06-26-2025 20:05:00.017 +0000 INFO TcpInputProc [1851 TcpListener] - Creating raw Acceptor for IPv4 port 1035 with Non-SSL
06-26-2025 20:25:30.471 +0000 WARN AutoLoadBalancedConnectionStrategy [1869 TcpOutEloop] - Possible duplication of events with channel=source::udp:1035|host::10.12.2.149|NETWORK|, streamId=1989559377486376685, offset=6 on host=3.213.185.213:9997 connid 0
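Note that the channel in the WARN line references source::udp:1035, so events are reaching the UF and being forwarded. As a hedged sketch only (the sourcetype and index names here are hypothetical, and the index must already exist in Splunk Cloud or events are typically dropped), the corresponding inputs.conf listener might look like:

# inputs.conf on the UF
[udp://1035]
sourcetype = syslog
index = network
connection_host = ip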
Yes, that looks like a viable approach; thank you. Too bad Power Automate is tricky and I'm not a programmer. I'll leave this discussion open for a while in case anyone has already achieved the goal and wants to share.
setcap 'cap_net_bind_service=+ep' /opt/splunkforwarder/bin/splunk

I just tried this and am still seeing the same issue. I also had my system admin move the user splunkfwd (this user runs Splunk) into the sudo group, and I'm still seeing the same errors in splunkd.log:

06-26-2025 18:46:46.515 +0000 INFO TcpInputConfig [921 TcpListener] - IPv4 port 514 is reserved for raw input
06-26-2025 18:46:46.515 +0000 INFO TcpInputConfig [921 TcpListener] - IPv4 port 514 will negotiate s2s protocol level 7
06-26-2025 18:46:46.515 +0000 ERROR TcpInputProc [921 TcpListener] - Could not bind to port IPv4 port 514: Permission denied
06-26-2025 19:27:32.285 +0000 INFO TcpInputConfig [1554 TcpListener] - IPv4 port 514 is reserved for raw input
06-26-2025 19:27:32.286 +0000 INFO TcpInputConfig [1554 TcpListener] - IPv4 port 514 will negotiate s2s protocol level 7
06-26-2025 19:27:32.286 +0000 ERROR TcpInputProc [1554 TcpListener] - Could not bind to port IPv4 port 514: Permission denied
To allow the UF access to port 514, try this:

setcap 'cap_net_bind_service=+ep' /path/to/uf
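A couple of hedged follow-ups, assuming a default UF install path and a systemd-managed service (neither is confirmed in this thread): the capability is set on the binary file, so it must be re-applied after any upgrade that replaces the binary, and a systemd unit may need the capability granted explicitly as well:

# Verify the file capability actually took effect
getcap /opt/splunkforwarder/bin/splunk

# If the UF runs under systemd (e.g. /etc/systemd/system/SplunkForwarder.service),
# grant the capability in the unit file too:
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE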
This is... bad. Firstly, it seems that it's data already received by something else, embedded in another format, and sent to Splunk. Secondly, these are completely different sourcetypes. So if you absolutely cannot separate them earlier, you should override the sourcetype at ingestion time so that each of those types is parsed differently.
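A minimal props.conf/transforms.conf sketch of such an index-time sourcetype override (the source stanza, regex, and sourcetype names here are hypothetical; adapt them to whatever actually distinguishes the embedded formats):

# props.conf, on the first Splunk instance that parses the data
[source::udp:1035]
TRANSFORMS-set_sourcetype = set_st_format_a

# transforms.conf
[set_st_format_a]
REGEX = ^<pattern identifying format A>
FORMAT = sourcetype::format_a
DEST_KEY = MetaData:Sourcetype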
I've been creating some new modern playbooks in SOAR for automation. One of the playbooks I created has a drop-down next to it that shows an "Outputs" menu with Name, Data Type, and Description fields that are all blank. Only one playbook has this option, and all were created from scratch. What caused this outputs drop-down on the one playbook? The playbook type was created as automation, not input.
Please provide more examples of the events you are dealing with, along with your desired results and what you are currently getting (and why it is not correct).
And, to add to the answers already provided: there is no such thing as "syslog" in the sense of a strictly defined protocol. Syslog can mean many different things depending on context, and it's definitely not limited to port 514. It's a perfectly normal situation for "syslog" data to be sent to another port.
Heavy forwarder with httpout to indexer cluster - Splunk Community
httpout is not a HEC output (although it needs an HEC input and a valid HEC token; it's complicated). It's the S2S protocol embedded in HTTP transport. It is indeed a fairly recent invention, mostly aimed at situations like yours, where it's easier (politically, not technically) to allow outgoing HTTP traffic (even if it's only pseudo-HTTP) than some unknown protocol. Maybe this is the correct explanation.
We will be installing Splunk Connect for Syslog soon, but I haven't got there yet; that will be more involved. We previously tried running syslog-ng on the server and monitoring the file, but everything came into Splunk Cloud from the same host. It was a mess. When I installed the Universal Forwarder on the new servers, I created the new user splunkfwd to run it, just like the instructions said. Can I simply change the permissions for user splunkfwd? At this point I don't really care if it runs with root privileges. What would be the needed permissions for user splunkfwd to overcome this? Thanks, -Pete
Let me clarify terms and be more specific:
S2S+TLS = Splunk-to-Splunk protocol with TLS encryption
HTTPS = HTTP protocol with TLS encryption
I would like to use the HTTP protocol with TLS to send data from a heavy forwarder to an HTTP Event Collector (HEC). There are configuration options in the outputs.conf spec for doing this. This post also says something similar: How to send data to two output types, [tcpout] and... - Splunk Community
"It also states httpout is only supported on UFs but it works on HFs as well. I've tested with both httpout and tcpout but httpout will take precedence every-time."
From everything I can tell, it never works. It doesn't even make an attempt to connect to the HEC (verified via packet capture).
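For reference, the outputs.conf options being referred to look roughly like the sketch below. This is an illustration of the documented stanza only (the host, port, and token are placeholders), not a confirmed-working configuration - as noted above, in testing it never even attempts the connection on an HF:

[httpout]
httpEventCollectorToken = <your HEC token>
uri = https://hec.example.com:443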