All Posts

Another way to possibly achieve this goal, albeit slowly, is to use tokens in a Classic Simple XML dashboard to execute a series of searches.

<form version="1.1" theme="light">
  <label>Token-driven repetition</label>
  <init>
    <set token="trace"/>
  </init>
  <fieldset submitButton="false">
    <input type="dropdown" token="limit">
      <label>Loop count</label>
      <choice value="0">0</choice>
      <default>0</default>
      <initialValue>0</initialValue>
      <fieldForLabel>count</fieldForLabel>
      <fieldForValue>count</fieldForValue>
      <search>
        <query>| makeresults count=5 | streamstats count</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <eval token="current">if($value$&gt;0,$value$,null())</eval>
        <set token="trace"/>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        $trace$
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval counter=$current$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 0">
              <eval token="trace">if($result.counter$&gt;0,$trace$." ".$result.counter$,$trace$)</eval>
              <eval token="current">$result.counter$-1</eval>
            </condition>
            <condition match="$current$=0">
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

The idea is that the input (a dropdown in this case, but you could use a row count from your initial field list) limits the number of times the "loop" is executed. The table panel executes a search and reduces the counter by one. There is also a panel that shows a trace to confirm that the search has been executed. Updated due to the way the null() function now operates with respect to unsetting tokens!
Hi @isoutamo, You may be aware that Splunk has a panel that records license warnings and breaches, but once the number of warnings/breaches (I assume it was 5) exceeds the 30-day limit, Splunk would cut the data intake and the panels become unusable. To make sure that data isn't completely cut off, we at our company made an app that keeps track of whenever we hit the mark of 3 breaches in a 30-day rolling period. Upon hitting that mark, the port flip comes into action and flips the default receiving port from 9997 to XXXX - some arbitrary value, since indexer discovery will determine the new port as well once the indexer is restarted. This strategy was initially implemented as a port switch from 9997 to 9998, with outputs.conf configured the usual static way, where I list the targets in <server>:<port> format, but it was later reworked to suit the indexer discovery technique. What was strange was that we never had network issues on the search head with the classic forwarding setup, but we noticed them with indexer discovery. Also, to confirm the problem exists only with indexer discovery, I simulated the same in a test environment and noticed worse network usage when the indexers are not reachable, though the search head was still usable. The only difference between the two environments is that production has a lot of incoming data to the indexers, and the SH also acts as the license master for a lot of other sources, whereas the test environment doesn't. The data flow begins again as we switch the ports back to 9997 after midnight, once the new day's license period starts and the SH is back to its normal state.
Hi Team, We have installed the latest npm appdynamics version, 24.12.0, and it adds the dependent packages below, which have critical vulnerabilities in package-lock.json:

"appdynamics-libagent-napi"
"appdynamics-native"
"appdynamics-protobuf"

Please let us know the resolution for this issue, as our application will not support lower versions of appdynamics. Thanks
Without knowing what you're trying to do, I couldn't answer that - if you managed to upload the app, then I would guess there might be some issues with your JS, but there may also be some sandbox res... See more...
Without knowing what you're trying to do, I couldn't answer that - if you managed to upload the app, then I would guess there might be some issues with your JS, but there may also be some sandbox restrictions around what you can do.
The app was uploaded successfully, but when I try to use JavaScript in Splunk Cloud I get an error. Is there any blockage at the Splunk end for security reasons? I can use the same JavaScript in Splunk Enterprise.
In Simple XML dashboards you can control what is displayed: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#dashboard_or_form - see the hide* attributes there. In practice, though, a user who has rights to edit the dashboard can just add /edit or /editxml to the dashboard URL, so this only works at the UI layer.
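For example, a minimal sketch of a dashboard root element using two of those attributes (hideExport and hideEdit are among the documented hide* attributes; the rest is placeholder content):

<dashboard version="1.1" hideExport="true" hideEdit="true">
  <label>Locked-down dashboard</label>
  <row>
    <panel>
      <html>panel content here</html>
    </panel>
  </row>
</dashboard>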
I just built an application that contains a dashboard, and I don't want to have an export button and a duplicate button at the top of the dashboard. I tried to remove the export_results_is_visible capability, but the export button is still visible and usable on the application dashboard. Are there any other ways to disable them?
You mean you uploaded the app OK, but your JS is throwing errors or the AppInspect process failed and you could not upload the app?
I don’t believe that you'll have any issues with those 8.x.x UFs and Splunk 9.3.x or even 9.4.x. Those will work together; maybe some modifications are needed, but probably none. Here is one old post which points to some other posts, depending on your environment: https://community.splunk.com/t5/Deployment-Architecture/Splunk-Migration-from-existing-server-to-a-new-server/m-p/681655/highlight/true#M28001 If/when you can get a new host to use for testing, this shouldn't be an issue. Just test it on test systems following the instructions from the posts above. When you have checked and approved those tests, do the real migration. I'm not 100% sure that there are no issues with the amz2023 version; I have a feeling there could be something that needs to be configured separately, e.g. cgroups or something else. You'll probably find more details at https://splunkcommunity.slack.com/archives/C03M9ENE6AD
Can you explain more about how and when you are doing this "flip"? You probably know that once you have overused your license, it doesn't matter how much more you do it? When you "flip" your indexer receiving port, indexer discovery updates this information in its list. Then, when someone asks, it reports the new port, and the UFs update their targets based on that. If you have a firewall between your sources and the indexers, it will block the connections and they can no longer send events to the indexers. But if you have UFs configured with a static host+port combination, they will keep trying to send to the old targets. If your SHs and other infra nodes are using indexer discovery, they will start to use the new ports. Of course, if the firewall isn't open between those nodes and the indexers, traffic stops, and when queues fill up, other issues will probably arise. You should check that those "flipped" ports are open between the SHs and the indexers; then your environment should work as expected. Whether this is the best way to avoid license overuse is another story!
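For reference, the forwarder-side configuration for indexer discovery looks roughly like this - a sketch only; the discovery name, group name, manager host, and key are placeholders, not taken from this thread:

# outputs.conf on a forwarder using indexer discovery (hypothetical names)
[indexer_discovery:my_discovery]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <key configured on the cluster manager>

[tcpout:discovered_indexers]
indexerDiscovery = my_discovery

[tcpout]
defaultGroup = discovered_indexers

With this in place, forwarders poll the cluster manager for the current list of receiving <ip>:<port> pairs, which is why a "flipped" receiving port propagates automatically - and why any firewall between the forwarders (or SHs) and the indexers must allow the new port as well.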
4. That was the most obvious example. There might be some other dependencies - for example, if you're using dbconnect, you require a JRE.
5. Yes, chowning should take care of it. But as I understood from your earlier comments, you have your index volume(s) outside /opt/splunk. You need to take care of their ownership as well.
Hi, depending on specific field values I would like to perform different actions per event in one search string with the map command. I will try to create a simple example:
1. If there is an event that includes field=value_1, I would like to remove rows from a lookup that have field=value_1.
2. If there is an event that includes field=value_2, I would like to add a row to another lookup.
Here is how I create my sample data:

| makeresults format=csv data="field
value_1
value_2"
| eval spl=case(
    field="value_1", "| inputlookup test.csv | search NOT field=\""+field+"\" | outputlookup test_2.csv",
    field="value_2", "| makeresults | eval field=\""+$field$+"\" | outputlookup test_2.csv")

The easiest way I thought of was adding

| map search="$spl$"

But Splunk seems to put quotes around the value. Avoiding that with the approach described here (https://community.splunk.com/t5/Installation/How-do-you-interpret-string-variable-as-SPL-in-Map-function/m-p/385353) does not work, because I cannot use the search command this way. Do you have any ideas how to achieve my goal?
Hi @PickleRick, Thank you so much for your help. Please find my comments inline:
1. I assume (never used it myself) that Amazon Linux is also an RPM-based distro and you'll be installing Splunk the same way it was installed before.
Yes, Amazon Linux natively supports the RPM package installer.
2. Remember to shut down the Splunk service before moving the data. And of course don't start the new instance before you copy the data.
Got it.
3. I'm not sure why you want to snapshot the volumes. For backup in case you need to roll back?
Yes, correct, in case there is a need to roll back.
4. You might have other dependencies lying around, not included in $SPLUNK_HOME - for example certificates.
In our case, the SSL certificates are deployed under /opt/splunk/etc/certs/, as the SSL offloading is directly on the server and there is no load balancer or proxy in front. Can you think of anything else that may be deployed outside of /opt/splunk?
5. If you move whole filesystems between server instances the UIDs and GIDs might not match and you might need to fix your accesses.
Can we recursively chown the files on the new server after migration to ensure correct ownership? I hope that should take care of it:
sudo chown -R splunk:splunk /opt/splunk
Oh, and most importantly - I didn't notice that at first - DON'T UPGRADE AND MOVE AT THE SAME TIME! Either upgrade and then do the move to the same version on a new server, or move to the same 8.x you have now and then upgrade on the new server.
Sure, I prefer doing the latter, but the older version of Splunk Enterprise 8.2.2.1 does not support Amazon Linux.
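A minimal sketch of the move itself, assuming a standard /opt/splunk install and the same splunk user on both hosts (the hostname and paths are placeholders):

# On the old server: stop Splunk cleanly before copying anything
/opt/splunk/bin/splunk stop

# Copy the whole install, plus any index volumes living outside /opt/splunk
rsync -aH /opt/splunk/ newhost:/opt/splunk/

# On the new server: fix ownership in case UIDs/GIDs differ, then start
chown -R splunk:splunk /opt/splunk
/opt/splunk/bin/splunk start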
Talked to my sysadmin; we decided to use port 1035 instead of port 514. Not getting the socket errors in splunkd.log anymore, but still not seeing the messages from the UF in Splunk Cloud.

root@NHC-NETSplunkForwarder:/opt/splunkforwarder/var/log/splunk# cat splunkd.log | grep "1035"
06-26-2025 20:05:00.017 +0000 INFO TcpInputConfig [1851 TcpListener] - IPv4 port 1035 is reserved for raw input
06-26-2025 20:05:00.017 +0000 INFO TcpInputConfig [1851 TcpListener] - IPv4 port 1035 will negotiate s2s protocol level 7
06-26-2025 20:05:00.017 +0000 INFO TcpInputProc [1851 TcpListener] - Creating raw Acceptor for IPv4 port 1035 with Non-SSL
06-26-2025 20:25:30.471 +0000 WARN AutoLoadBalancedConnectionStrategy [1869 TcpOutEloop] - Possible duplication of events with channel=source::udp:1035|host::10.12.2.149|NETWORK|, streamId=1989559377486376685, offset=6 on host=3.213.185.213:9997 connid 0
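For comparison, the UF-side input feeding that udp:1035 channel would look something like this - a sketch; the sourcetype and index are placeholders, and the index must already exist in Splunk Cloud or the events may be dropped:

# inputs.conf on the UF (hypothetical values)
[udp://1035]
sourcetype = syslog
index = network
connection_host = ip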
Yes, that looks like a viable approach. Thank you. Too bad Power Automate is tricky and I'm not a programmer. I'll leave this discussion open for a few days in case anyone has already achieved the goal and wants to share.
setcap 'cap_net_bind_service=+ep' /opt/splunkforwarder/bin/splunk

I just tried this, still seeing the same issue. I also had my system admin move user splunkfwd (this user runs Splunk) into the sudo group. Still seeing the same errors in splunkd.log:

06-26-2025 18:46:46.515 +0000 INFO TcpInputConfig [921 TcpListener] - IPv4 port 514 is reserved for raw input
06-26-2025 18:46:46.515 +0000 INFO TcpInputConfig [921 TcpListener] - IPv4 port 514 will negotiate s2s protocol level 7
06-26-2025 18:46:46.515 +0000 ERROR TcpInputProc [921 TcpListener] - Could not bind to port IPv4 port 514: Permission denied
06-26-2025 19:27:32.285 +0000 INFO TcpInputConfig [1554 TcpListener] - IPv4 port 514 is reserved for raw input
06-26-2025 19:27:32.286 +0000 INFO TcpInputConfig [1554 TcpListener] - IPv4 port 514 will negotiate s2s protocol level 7
06-26-2025 19:27:32.286 +0000 ERROR TcpInputProc [1554 TcpListener] - Could not bind to port IPv4 port 514: Permission denied
To allow the UF access to port 514, try this:

setcap 'cap_net_bind_service=+ep' /path/to/uf
This is... bad. Firstly, it seems that it's data already received by something else, embedded in another format, and sent to Splunk. Secondly, these are completely different sourcetypes. So if you absolutely cannot separate them earlier, you should overwrite the sourcetype on ingestion so that each of those types is parsed differently.
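A sketch of what that ingestion-time overwrite can look like - the source stanza, regexes, and sourcetype names below are hypothetical placeholders, not taken from this thread:

# props.conf - applied to the stream the mixed data arrives on
[source::udp:514]
TRANSFORMS-set_sourcetype = set_st_type_a, set_st_type_b

# transforms.conf - one stanza per embedded format, keyed on a distinguishing pattern
[set_st_type_a]
REGEX = pattern_unique_to_type_a
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::type_a

[set_st_type_b]
REGEX = pattern_unique_to_type_b
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::type_b

Each TRANSFORMS- entry is evaluated per event at parse time, so events matching different patterns end up with different sourcetypes and can then get their own props.conf settings.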
I've been creating some new modern playbooks in SOAR for automation. One of the playbooks that I created has a drop down next to it that shows an "outputs" menu with Name, Data Type, and Description fields that are all blank. Only one playbook has this option and all were created from scratch. What caused this output dropdown on the one playbook? The playbook type was created as automation and not input.
Please provide more examples of the events you are dealing with, and include your desired results and what you are getting (and why it is not correct).