All Posts



You must have a license to run ES before you can download it.  A Developer license does not grant access to ES.
I have a Developer license, but I'm unable to download ES. Can anyone help me with this?
In addition to @kiran_panchavat's answer: all components support backward communication to n-3 Splunk versions, in decreasing order of significance across the architecture tiers. The first tier is management nodes such as the cluster manager and the search head cluster deployer. Next come components like search heads and indexers, and then the forwarders.
I have a unique problem regarding SNMP and Splunk ITSI.

First, my VNF node was forwarding SNMP traps to an SNMP target via SNMPv3. That target supports SNMP auto-discovery, so I didn't have to manually configure the engine ID. Later I got the option of integrating my node with Splunk ITSI and SC4SNMP, which I did, but initially they didn't support engine ID auto-discovery, so I ran SNMPGET manually and provided the engine ID to them.

Now I have started sending my traps towards both targets with the same OID and engine ID, but my alarms are not reaching the Splunk index, even though we can capture them on the SC4SNMP port. Later I found out that Splunk ITSI is receiving the same alarm, with the same OID, forwarded from the previous target. But this time that target is using SNMPv2 and sending it as a community string with a few OIDs bundled together.

Could this be the reason my node's original trap is not reaching the correct index?
To concur with the above answers, you would have to utilize a lookup file that lists all of the sources you want to monitor. Natively, Splunk does not have "source = 0 events" (it doesn't know what it doesn't know). In the environment we work in, we apply a similar approach, but it's based on host and whether the sources are coming in or not for our customers.

| tstats values(source) as source, values(sourcetype) as sourcetype WHERE index=[index] [ | inputlookup [myHostLookup].csv | fields host ] by host
| stats count, values(sourcetype) as sourcetype, values(source) as source by host
| eval Reporting=if(isnull(source), "No Matching Sources", "Yes")
| table host, Reporting, source, sourcetype

If this reply helps you, Karma would be appreciated.
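For anyone who wants to prototype the comparison logic outside of SPL first, the lookup-versus-reporting check is essentially a set difference. A minimal Python sketch with made-up host names (not a replacement for the tstats search above):

```python
# Hosts we expect to report (the equivalent of myHostLookup.csv).
expected_hosts = {"web01", "web02", "db01"}

# Hosts actually seen in the index, with their sources (the tstats result).
reporting = {
    "web01": ["/var/log/messages"],
    "db01": ["/var/log/secure"],
}

# Flag each expected host, mirroring the eval/if in the SPL above.
status = {
    host: "Yes" if host in reporting else "No Matching Sources"
    for host in sorted(expected_hosts)
}

for host, flag in status.items():
    print(host, flag)
```

Here web02 is in the lookup but has no sources, so it gets flagged, exactly like the "No Matching Sources" rows in the SPL output.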
Hi @meg

Please can you confirm the sourcetype that you are using? Also, is this being read directly using a UF and sent to Splunk without going via other systems? Are you ingesting this using the Splunk Add-on for Sysmon for Linux on the UF?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @mbissante

Just a follow-up on my previous post; the following are for 9.0.9, which was the last 9.0.x release:

-------- Linux --------

-- Tarball (TGZ)
wget -O splunk-9.0.9-6315942c563f-Linux-x86_64.tgz 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f-Linux-x86_64.tgz'
wget -O splunkforwarder-9.0.9-6315942c563f-Linux-x86_64.tgz 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f-Linux-x86_64.tgz'

-- Debian (DEB)
wget -O splunk-9.0.9-6315942c563f-linux-2.6-amd64.deb 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f-linux-2.6-amd64.deb'
wget -O splunkforwarder-9.0.9-6315942c563f-linux-2.6-amd64.deb 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f-linux-2.6-amd64.deb'

-- RHEL (RPM)
wget -O splunk-9.0.9-6315942c563f.x86_64.rpm 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f.x86_64.rpm'
wget -O splunkforwarder-9.0.9-6315942c563f.x86_64.rpm 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f.x86_64.rpm'

Kudos to ryanadler for this great tool: https://github.com/ryanadler/downloadSplunk
Hi @mbissante

Below are the download links for 9.0.1, if this helps:

Splunk Linux tar file - https://download.splunk.com/products/splunk/releases/9.0.1/linux/splunk-9.0.1-82c987350fde-Linux-x86_64.tgz
Splunk Linux rpm file - https://download.splunk.com/products/splunk/releases/9.0.1/linux/splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
Splunk Linux Debian file - https://download.splunk.com/products/splunk/releases/9.0.1/linux/splunk-9.0.1-82c987350fde-linux-2.6-amd64.deb
Splunk Windows MSI file - https://download.splunk.com/products/splunk/releases/9.0.1/windows/splunk-9.0.1-82c987350fde-x64-release.msi
Hi, I need to upgrade Splunk v8.2.2.1 on RHEL 7.6 to Splunk v9.4 on RHEL 9.6. I saw that Splunk 8.2 does not support RHEL 9.6, and the customer cannot upgrade to RHEL 8.x. The only Splunk version compatible with both RHEL versions is Splunk 9.0, but it is impossible to download it directly from the Splunk site. How can I download this older version? Thank you, Mauro
@RAVISHANKAR Yes, a Splunk Enterprise search head running version 9.4.2 can communicate with indexers running version 9.2.1, but it's recommended to upgrade all components to the same version to ensure full feature compatibility and support. Yes, UF 8.0.5 can still forward data to Splunk indexers running 9.2.1 or 9.4.2. However, Splunk no longer provides full support for UF 8.0.x.

Splunk Software Support Policy | Splunk
About upgrading to 8.0 READ THIS FIRST - Splunk Documentation
@meg Please verify your sourcetype. The Splunk Add-on for Sysmon for Linux supports the following source type: sysmon:linux
Yes. The range of interoperability between UFs and receiving components (intermediate forwarders/indexers) is quite wide. Even if the official documentation doesn't list something as supported, things might just work. I've had UFs as old as 6.6 sending to version 9 indexers, and it ran OK. There might be a minor issue with v9 UFs sending to older indexers, because new UFs generate config change events which are supposed to go to indexes not present on older Splunk instances. The temporary workaround for this is to disable the config tracker inputs on the UFs until the indexers are upgraded to v9. But even if you don't do that, they will generally work; it's just that those events will either land in your last chance index or will generate a warning about a non-existent index and get dropped completely.
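For anyone looking for the concrete setting: on 9.x I believe the configuration change tracker can be switched off in server.conf on the UF (do check the server.conf spec for your exact version before relying on this):

```
# server.conf on the UF -- stops generation of config change events
# until the downstream indexers are upgraded to v9
[config_change_tracker]
disabled = true
```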
@meg renderXml = false is a setting typically used in inputs.conf on a Universal Forwarder for Windows Event Logs. If you're forwarding Linux logs, this setting might not be relevant unless you're using it in a specific context. Have you installed the below add-on to parse the data? Can you share your inputs.conf file here?

https://splunkbase.splunk.com/app/6652
https://docs.splunk.com/Documentation/AddOns/released/NixSysmon/Sourcetypes
Hello, we are planning to upgrade Splunk Enterprise from version 9.2.1 to the latest version, 9.4.2. Can a 9.4.2 search head talk to a 9.2.1 indexer, or do we need to upgrade the indexers to the same version as well? Also, will Splunk UF 8.0.5 be able to talk to the indexers? I read that it will work, but that we will not have Splunk support for these versions, only P3 support if any issues arise. Thanks
My Linux logs cannot be parsed in the dashboard. My renderXml is set to false.
Below is the YAML file configuration; I'm trying to configure Windows to collect data.

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
      filesystem:
      network:
      paging:
      processes:
exporters:
  splunk_hec:
    token: ""
    endpoint: "https://testsplunk.com:8088"
    source: "otelcol"
    sourcetype: "_json"
    index: "telemetry_test"
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [splunk_hec]
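Before debugging the collector itself, it can help to confirm the HEC endpoint and token accept events at all. A hedged Python sketch that only assembles the request a HEC event submission would use (the endpoint and index come from the config above; `build_hec_request` is a made-up helper name, and actually sending the request requires a valid token):

```python
import json

def build_hec_request(endpoint, token, event,
                      source="otelcol", sourcetype="_json", index="telemetry_test"):
    """Assemble URL, headers, and body for a Splunk HEC /services/collector/event call."""
    url = endpoint.rstrip("/") + "/services/collector/event"
    headers = {
        "Authorization": "Splunk " + token,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "event": event,
        "source": source,
        "sourcetype": sourcetype,
        "index": index,
    })
    return url, headers, body

url, headers, body = build_hec_request(
    "https://testsplunk.com:8088", "<your-hec-token>", {"message": "hec smoke test"}
)
print(url)
```

If a POST of that body returns success but nothing lands in telemetry_test, the problem is more likely the index/token permissions than the collector pipeline.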
Another way to possibly achieve this goal, albeit slowly, is to use tokens in a Classic SimpleXML dashboard to execute a series of searches.

<form version="1.1" theme="light">
  <label>Token-driven repetition</label>
  <init>
    <set token="trace"/>
  </init>
  <fieldset submitButton="false">
    <input type="dropdown" token="limit">
      <label>Loop count</label>
      <choice value="0">0</choice>
      <default>0</default>
      <initialValue>0</initialValue>
      <fieldForLabel>count</fieldForLabel>
      <fieldForValue>count</fieldForValue>
      <search>
        <query>| makeresults count=5 | streamstats count</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <eval token="current">if($value$&gt;0,$value$,null())</eval>
        <set token="trace"/>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        $trace$
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | fields - _time | eval counter=$current$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 0">
              <eval token="trace">if($result.counter$&gt;0,$trace$." ".$result.counter$,$trace$)</eval>
              <eval token="current">$result.counter$-1</eval>
            </condition>
            <condition match="$current$=0">
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

The idea being that the input (in this case, but you could use a row count from your initial field list) is used to limit the number of times the "loop" is executed. The panel executes a search and reduces the counter by one. There is a panel which essentially shows a trace to confirm that the search has been executed. Updated due to the way the null() function now operates with respect to unsetting tokens!
Hi @isoutamo,

You may be aware that Splunk has its own panel that records license warnings and breaches, but once the number of warnings/breaches (I assume it was 5) exceeds the 30-day limit, Splunk would cut off data intake and the panels become unusable. To make sure that data isn't completely cut off, we at our company made an app that keeps track of whenever we hit the mark of 3 breaches in a 30-day rolling period. Upon hitting that mark, the port flip comes into action, flipping the default receiving port from 9997 to XXXX (some random value, because indexer discovery will determine the new port as well once the indexer is restarted).

This strategy was initially implemented as a port switch from 9997 to 9998, with inputs.conf configured in the usual static way, where I list entries in the <server>:<port> format, but it was later reformatted to suit the indexer discovery technique. What was strange about this technique was that we never had network issues on the search head with the classic forwarding technique, but noticed them with indexer discovery.

To confirm that the problem exists only with indexer discovery, I simulated the same in a test environment and observed the worse network usage when the indexers are not reachable, though the search head remained usable. The only difference between the two environments is that production has a lot of incoming data to the indexers, and the SH also acts as the license master for a lot of other sources, whereas the test environment doesn't. The data flow begins again as we switch the ports back to 9997 after midnight, once the new day's license period starts and the SH is back to its normal state.
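For anyone implementing something similar, the rolling-window check itself is simple. A minimal Python sketch (the dates and the 3-breach threshold are illustrative, `breaches_in_window` is a made-up helper name, and the real breach records would come from Splunk's license usage logs):

```python
from datetime import date, timedelta

def breaches_in_window(breach_dates, today, window_days=30):
    """Count license breaches in the rolling window (today - window_days, today]."""
    cutoff = today - timedelta(days=window_days)
    return sum(1 for d in breach_dates if cutoff < d <= today)

# Illustrative breach history.
breaches = [date(2024, 1, 2), date(2024, 1, 10), date(2024, 1, 25)]

# Three breaches inside the 30-day window -> time to flip the receiving port.
if breaches_in_window(breaches, date(2024, 1, 26)) >= 3:
    print("flip receiving port away from 9997")
```

As old breaches age out of the window the count drops back below the threshold, which is what lets the port revert after the new license day starts.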
Hi Team,

We have installed the npm appdynamics package 24.12.0 (latest version), and it adds the below dependent packages, which have critical vulnerabilities, to package-lock.json:

"appdynamics-libagent-napi"
"appdynamics-native"
"appdynamics-protobuf"

Please let us know the resolution for this issue, as our application will not support a lower version of appdynamics.

Thanks
Without knowing what you're trying to do, I couldn't answer that - if you managed to upload the app, then I would guess there might be some issues with your JS, but there may also be some sandbox restrictions around what you can do.