@RAVISHANKAR  Yes, a Splunk Enterprise Search Head running version 9.4.2 can communicate with Indexers running version 9.2.1, but it's recommended to upgrade all components to the same version to ensure full feature compatibility and support. Yes, UF 8.0.5 can still forward data to Splunk Indexers running 9.2.1 or 9.4.2. However, Splunk no longer provides full support for UF 8.0.x. See "Splunk Software Support Policy" on splunk.com and "About upgrading to 8.0 READ THIS FIRST" in the Splunk documentation.
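As a quick sanity check before and after the upgrade, you can compare component versions with a REST search from the search head (a minimal sketch, assuming your role is allowed to run | rest against all peers):

| rest splunk_server=* /services/server/info
| table splunk_server version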
@meg Please verify your sourcetype. The Splunk Add-on for Sysmon for Linux supports the following source type: sysmon:linux
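If you're unsure which sourcetype your events actually arrived with, a quick check from the search bar (the index name and host are assumptions; adjust them to where your Sysmon data lands):

index=main host=<your_linux_host> | stats count by sourcetype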
Yes. The range of interoperability between UFs and receiving components (intermediate forwarders/indexers) is quite big. Even if the official documentation doesn't list something as supported, things might just work. I've had UFs as old as 6.6 sending to version 9 indexers and it ran OK. There might be a minor issue with v9 UFs sending to older indexers because new UFs generate config change events which are supposed to go to indexes not present on older Splunk instances. The temporary workaround for this is to disable the config tracker inputs on the UFs until the indexers are upgraded to v9. But even if you don't do that, they will generally work; it's just that those events will either land in your last chance index or will generate a warning about a non-existent index and get dropped completely.
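For reference, a minimal sketch of disabling the config tracker on a UF (this is how Splunk 9 exposes it in server.conf; verify against the server.conf spec for your exact version):

# server.conf on the UF
# stops the UF from generating config change events destined for the
# _configtracker index, which doesn't exist on pre-9.x indexers
[config_change_tracker]
disabled = true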
@meg  renderXml = false - this setting is typically used in inputs.conf on a Universal Forwarder for Windows Event Logs. If you're forwarding Linux logs, this setting is not relevant unless you're using it in a specific context. Have you installed the add-on below to parse the data? Can you share your inputs.conf file here?  https://splunkbase.splunk.com/app/6652  https://docs.splunk.com/Documentation/AddOns/released/NixSysmon/Sourcetypes
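For comparison, a minimal inputs.conf sketch for monitoring Sysmon for Linux output (the file path is hypothetical - Sysmon for Linux typically writes via syslog, so point the monitor at wherever your events actually land):

# inputs.conf on the Linux UF (path is an example)
[monitor:///var/log/sysmon/sysmon.log]
sourcetype = sysmon:linux
index = main
disabled = false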
Hello, we are planning to upgrade Splunk Enterprise from version 9.2.1 to the latest version, 9.4.2. Can a 9.4.2 Search Head talk to a 9.2.1 indexer, or do we need to upgrade the Indexers to the same version as well? Also, will Splunk UF 8.0.5 be able to talk to the Indexers? I read that it will work, but that we will not have Splunk support for this version, only P3 support if there are any issues. Thanks
My Linux logs cannot be parsed in the dashboard. My renderXml is set to false.
Below is the YAML file configuration; I'm trying to configure Windows hosts to collect data.

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
      filesystem:
      network:
      paging:
      processes:
exporters:
  splunk_hec:
    token: ""
    endpoint: "https://testsplunk.com:8088"
    source: "otelcol"
    sourcetype: "_json"
    index: "telemetry_test"
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [splunk_hec]
Another way to possibly achieve this goal, albeit slowly, is to use tokens in a Classic SimpleXML dashboard to execute a series of searches.

<form version="1.1" theme="light">
  <label>Token-driven repetition</label>
  <init>
    <set token="trace"/>
  </init>
  <fieldset submitButton="false">
    <input type="dropdown" token="limit">
      <label>Loop count</label>
      <choice value="0">0</choice>
      <default>0</default>
      <initialValue>0</initialValue>
      <fieldForLabel>count</fieldForLabel>
      <fieldForValue>count</fieldForValue>
      <search>
        <query>| makeresults count=5 | streamstats count</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <eval token="current">if($value$&gt;0,$value$,null())</eval>
        <set token="trace"/>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        $trace$
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval counter=$current$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 0">
              <eval token="trace">if($result.counter$&gt;0,$trace$." ".$result.counter$,$trace$)</eval>
              <eval token="current">$result.counter$-1</eval>
            </condition>
            <condition match="$current$=0">
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

The idea being that the input (in this case, but you could use a row count from your initial field list) is used to limit the number of times the "loop" is executed. The panel executes a search and reduces the counter by one. There is another panel which essentially shows a trace to confirm that the search has been executed. Updated due to the way the null() function now operates with respect to unsetting tokens!
Hi @isoutamo , You may be aware that Splunk has its own panel that records license warnings and breaches, but once the number of warnings/breaches (I assume it was 5) exceeds the 30-day limit, Splunk cuts the data intake and the panels become unusable. To make sure that data isn't completely cut off, we at our company made an app that keeps track of whenever we hit the mark of 3 breaches in a 30-day rolling period. Upon hitting the mark, the port flip comes into action and flips the default receiving port from 9997 to XXXX (some arbitrary value), because indexer discovery will pick up the new port as well once the indexer is restarted. This strategy was initially implemented as a port switch from 9997 to 9998, with inputs.conf configured in the usual static way, where I list names in the <server>:<port> format, but it was later reworked to suit the indexer discovery technique. What was strange about this technique was that we never had network issues on the search head with the classic forwarding technique, but we noticed them with indexer discovery. To confirm the problem exists only after switching to indexer discovery, I simulated the same in a test environment and noticed much worse network usage when the indexers are not reachable, although the search head remained usable. The only difference between the two environments is that production has a lot of incoming data to the indexers, and the SH also acts as the license master for a lot of other sources, whereas the test environment doesn't. The data flow begins again as we switch the ports back to 9997 after midnight, once the new day's license period starts and the SH is back to its normal state.
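For context, a minimal sketch of what the flip amounts to on each indexer (the ports are examples from our setup, not a recommendation):

# inputs.conf on the indexer - normal state
[splunktcp://9997]
disabled = false

# after the 3rd breach in the rolling 30-day window, the app rewrites the
# stanza to a port the forwarders can't reach and restarts the indexer, so
# indexer discovery starts advertising the unreachable port:
# [splunktcp://9998]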
Hi Team, We have installed the latest npm appdynamics version, 24.12.0, and it adds the below dependent packages, which have critical vulnerabilities, to package-lock.json:

"appdynamics-libagent-napi"
"appdynamics-native"
"appdynamics-protobuf"

Please let us know the resolution for this issue, as our application will not support a lower version of appdynamics. Thanks
Without knowing what you're trying to do, I couldn't answer that - if you managed to upload the app, then I would guess there might be some issues with your JS, but there may also be some sandbox restrictions around what you can do.
The app was uploaded successfully, but when I try to use JavaScript in Splunk Cloud I get an error. Is there any blockage at the Splunk end for security reasons? I can use the same JavaScript in Splunk Enterprise.
In Simple XML dashboards you can control what is displayed:  https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#dashboard_or_form See the hide* attributes there. In practice, though, the user can in fact just add /edit or /editxml to the dashboard URL if they have rights to edit it, so this works only at the UI layer.
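For example, a minimal sketch (check the hide* attribute list in the panel reference for your version):

<form version="1.1" hideExport="true" hideEdit="true">
  <label>My dashboard</label>
  <!-- rows and panels as usual -->
</form>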
I just built an application that contains a dashboard, and I don't want the export and duplicate buttons at the top of the dashboard. I tried removing the export_results_is_visible capability, but the export button is still visible and usable on the application dashboard. Are there any other ways to disable them?
You mean you uploaded the app OK but your JS is throwing errors, or the AppInspect process failed and you could not upload the app?
I don't believe that you'll have any issues with those 8.x.x UFs and Splunk 9.3.x or even 9.4.x. Those will work together; maybe some modifications are needed, but probably none. Here is one old post which points to some other posts, depending on your environment: https://community.splunk.com/t5/Deployment-Architecture/Splunk-Migration-from-existing-server-to-a-new-server/m-p/681655/highlight/true#M28001 If/when you can set up a new host to use for testing, this shouldn't be an issue. Just test it on test systems with the instructions from the posts above. Once you have checked and approved those tests, do the real migration. I'm not 100% sure that there aren't any issues with the amz2023 version. I have a feeling that there could be something which needs to be configured separately, e.g. cgroups or something else? You will probably find more details at https://splunkcommunity.slack.com/archives/C03M9ENE6AD
Can you explain in more detail how and when you are doing this "flip"? You probably know that once you have overused your license, it doesn't matter how much more you use it? When you "flip" your indexer receiving port, indexer discovery updates this information in its list. Then, when someone asks, it hands out the new port, and UFs update their targets based on that. If/when you have a firewall between your sources and the IDXes, it will block the connections and they cannot send events to the IDXes anymore. But if you have UFs configured to use a static host+port combination, those will keep trying to send to it continuously. If your SHs and other infra nodes are using indexer discovery, they will start to use the new ports. Of course, if there are no FW openings between those nodes and the IDXes, traffic stops, and when queues fill up, other issues will probably arise. You should check that the "flipped" ports are open between the SHCs and the IDXes; then your environment should work as expected. Whether this is the best way to avoid license overuse is another story!
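For reference, a minimal outputs.conf sketch of indexer discovery on a forwarder (the group names and manager URI are hypothetical):

# outputs.conf on the forwarder
[indexer_discovery:my_cluster]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <your_key>

[tcpout:discovered_indexers]
indexerDiscovery = my_cluster

[tcpout]
defaultGroup = discovered_indexers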
4. That was the most obvious example. There might be some other dependencies - for example, if you're using DB Connect, you require a JRE. 5. Yes, chowning should take care of it. But as I understood from your earlier comments, you have your index volume(s) outside /opt/splunk. You need to take care of their ownership as well.
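A minimal sketch of what that looks like (the external volume path is an example):

# fix ownership of the install dir and of any index volumes outside it
sudo chown -R splunk:splunk /opt/splunk
sudo chown -R splunk:splunk /data/splunk_indexes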
Hi, depending on specific field values I would like to perform different actions per event in one search string with the map command. I will try to create a simple example:

1. If there is an event that includes field=value_1, I would like to remove rows from a lookup that have field=value_1.
2. If there is an event that includes field=value_2, I would like to add a row to another lookup.

Here is how I create my sample data:

| makeresults format=csv data="field
value_1
value_2"
| eval spl=case(
    field="value_1", "| inputlookup test.csv | search NOT field=\""+field+"\" | outputlookup test_2.csv",
    field="value_2", "| makeresults | eval field=\""+$field$+"\" | outputlookup test_2.csv")

The easiest way I thought of was adding

| map search="$spl$"

but Splunk seems to put quotes around the value. Avoiding that with the approach described here (https://community.splunk.com/t5/Installation/How-do-you-interpret-string-variable-as-SPL-in-Map-function/m-p/385353) does not work, because I cannot use the search command this way. Do you have any ideas how to achieve my goal?
Hi @PickleRick, Thank you so much for your help. Please find my comments inline:

1. I assume (never used it myself) that Amazon Linux is also an RPM-based distro and you'll be installing Splunk the same way it was installed before.
Yes, Amazon Linux natively supports the RPM package installer.

2. Remember to shut down Splunk service before moving the data. And of course don't start the new instance before you copy the data.
Got it.

3. I'm not sure why you want to snapshot the volumes. For backup in case you need to roll back?
Yes, correct, in case there is a need to roll back.

4. You might have other dependencies lying around, not included in $SPLUNK_HOME - for example certificates.
In our case, the SSL certificates are deployed under /opt/splunk/etc/certs/, as the SSL offloading is directly on the server and there is no load balancer or proxy in front. Can you think of anything else that may be deployed outside of /opt/splunk?

5. If you move whole filesystems between server instances the UIDs and GIDs might not match and you might need to fix your accesses.
Can we recursively chown the files on the new server after migration to ensure correct ownership? I hope that should take care of it:

sudo chown -R splunk:splunk /opt/splunk

Oh, and most importantly - I didn't notice that at first - DON'T UPGRADE AND MOVE AT THE SAME TIME! Either upgrade and then do the move to the same version on a new server, or move to the same 8.x you have now and then upgrade on the new server.
Sure, I prefer doing the latter, but the older version of Splunk Enterprise, 8.2.2.1, does not support Amazon Linux.