All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Yes, I'm aware that the MC has panels where you can see some statistics about license usage. Unfortunately those are not aligned with the current license policy. If you have an official Enterprise license then the policy is 45/60, not 3/30. And if your license size is 100GB+ then a violation is non-blocking for searches. Independent of your license, there is no blocking on the ingesting side. So even if you have a hard license breach, ingesting still works, but you cannot run any searches except against internal indexes (to figure out why you breached)!

I expect that, since you have the LM on your SH and you are sending your internal logs (including those the LM needs) to your indexer cluster, then unless you have broken indexer discovery with wrong port information, you have hit "missing connection to LM" rather than a failure to index internal logs. Anyhow, you shouldn't drop any internal logs, as those are not counted towards your license usage. Only real data sent to indexes other than _* is counted as indexed data. Usually all that data comes from UFs/HFs, not your SH etc.

So my proposal is that you just switch the receiving port of the indexers to some valid port which is allowed only from the SH side by the FW. Then your SH (and the LM on the SH) can continue to send their internal logs to the indexers and everything should work. At the same time, all UFs with static indexer information cannot send events, as the receiving port has changed. If you have any real data inputs on the SH then you should set up an HF and move those inputs there. Of course the real fix is to buy a big enough Splunk license....
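As a sketch of that port switch (the port number, stanza name, and hostnames below are illustrative assumptions, not taken from your environment): open a new splunktcp receiving port on each indexer that the firewall permits only from the SH, and point the SH's outputs at it:

```
# indexers: inputs.conf -- new receiving port, allowed only from the SH by FW rules
[splunktcp://9998]
disabled = 0

# search head: outputs.conf -- send internal logs to the new port
[tcpout:primary_indexers]
server = idx1.example.com:9998, idx2.example.com:9998
```

UFs still configured with the old static port will then fail to connect, which is the intended effect here.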
This product was released back in 2023: https://community.splunk.com/t5/Product-News-Announcements/Observability-Cloud-Splunk-Distribution-of-the-OpenTelemetry/ba-p/672091 I'm using it successfully; however, it seems like it is not being maintained. No new versions of the Add-on have been released to keep up with changes in the Helm chart. I was able to successfully update from the default image in this version (0.86.0) to the latest (0.127.0); however, the EKS Add-on creates the config map mounted to the agents with some deprecated values that are no longer valid for the latest version of the image. Is there any intent to maintain this EKS Add-on, or is the recommendation to migrate to the Helm chart? (https://github.com/signalfx/splunk-otel-collector-chart)
My SignalFlow queries consistently end with "org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk." My code is similar to the example here: https://github.com/signalfx/signalflow-client-java

I create the transport and client, then loop and execute the same query once per iteration with an updated start time each time. I read all the messages in the iterator, though I ignore some types. I close the computation at the end of each iteration. The query seems to work fine. I get the data I expect.

The stack trace looks like this:

Jun 27, 2025 4:33:16 PM com.signalfx.signalflow.client.ServerSentEventsTransport$TransportEventStreamParser close
SEVERE: failed to close event stream
org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
    at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:250)
    at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:183)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:210)
    at org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:312)
    at org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:142)
    at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
    at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
    at java.base/sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:377)
    at java.base/sun.nio.cs.StreamDecoder.close(StreamDecoder.java:205)
    at java.base/java.io.InputStreamReader.close(InputStreamReader.java:192)
    at java.base/java.io.BufferedReader.close(BufferedReader.java:525)
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportEventStreamParser.close(ServerSentEventsTransport.java:476)
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportChannel.close(ServerSentEventsTransport.java:396)
    at com.signalfx.signalflow.client.Computation.close(Computation.java:168)

my code here

Should I be doing something different? thanks
Thanks. I'm thinking it might have been a time sync issue. If I set the start time slightly in the past, even by 1 second, it works. What I've settled on is setting the start time on the minute (no seconds) with the same value for the end time. That seems to return the single record that I want. thanks
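For what it's worth, that minute alignment can be sketched in plain Java (a sketch only; the class and variable names are illustrative, not part of the SignalFlow client API):

```java
public class MinuteAlignedStart {
    public static void main(String[] args) {
        long nowMs = System.currentTimeMillis();
        // Truncate epoch millis to the start of the current minute, so the
        // start time carries no seconds component; using the same value for
        // the stop time targets a single datapoint, as described above.
        long startMs = (nowMs / 60_000L) * 60_000L;
        long stopMs = startMs;
        System.out.println("start=" + startMs + " stop=" + stopMs);
    }
}
```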
@kiran_panchavat - this isn't strictly true. @ramiiitnzv What is the reason you're trying to use a dev license with ES? If you are a customer and want to try out ES then you should speak to your sales account team within Splunk; if you don't know who this is then you can try going via https://www.splunk.com/en_us/talk-to-sales.html. If you want to build apps that integrate with ES then a Dev license is probably appropriate, but as others have said, you don't automatically get access to the ES app within Splunkbase as this is based on entitlements. Ultimately I think if you need access to ES then it's the sales team who can grant access; if you explain your reasoning to them they should be able to find a resolution for you. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
I know this is an old forum thread but I'm having this same issue. My distributedsearch.conf was empty. Should I add the values mentioned in that scenario?
Here is an enhanced version of the dashboard which performs the actions you described (more or less).

<form version="1.1" theme="light">
  <label>Token-driven repetition save</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults format=csv data="field value_1 value_2" | stats count as counter</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 1">
              <eval token="current">if($result.counter$ &gt; 0,$result.counter$,null())</eval>
              <set token="trace"></set>
            </condition>
            <condition>
              <set token="trace"></set>
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <search>
          <query>| makeresults format=csv data="field value_1 value_2" | eval spl=case(field="value_1","| inputlookup test_2.csv | search NOT field=\""+field+"\" | outputlookup test_2.csv", field="value_2", "| makeresults | eval field=\""+field+"\" | outputlookup append=t test_2.csv")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>$current$</title>
        <search>
          <query>| makeresults format=csv data="field value_1 value_2" | eval spl=case(field="value_1","| inputlookup test_2.csv | search NOT field=\""+field+"\" | outputlookup test_2.csv", field="value_2", "| makeresults | eval field=\""+field+"\" | outputlookup append=t test_2.csv") | eval counter=$current$ | tail $current$ | reverse</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <condition match="$result.counter$ &gt; 1">
              <set token="spl">$result.spl$</set>
              <eval token="current">if($result.counter$ &gt; 1,$result.counter$-1,null())</eval>
            </condition>
            <condition>
              <eval token="spl">if($result.counter$ &gt; 0,$result.spl$,null())</eval>
              <unset token="current"/>
            </condition>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <search>
          <query>$spl$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <done>
            <unset token="spl"></unset>
          </done>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
Hi @danielbb My understanding on this (and I'd also be pleased if someone can confirm!) is that api_lt and api_et represent the time parameters provided by the user in the time picker or API when running a search, whereas search_lt and search_et represent the actual earliest and latest times used by Splunk during the search execution. If the user specifies an earliest/latest in the search itself, for example, this would override the time picker values (api_et/api_lt). If there is no earliest/latest in the search, then search_et/search_lt match api_et/api_lt. I don't recall seeing docs around this though, so if someone can find any please let me know. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
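One way to compare the two pairs yourself (a sketch; it assumes you can search the _audit index, where these fields appear on search audit events):

```
index=_audit action=search info=completed
| table user, search_id, api_et, api_lt, search_et, search_lt
```

Running a search with an explicit earliest/latest in the SPL versus only the time picker should show the pairs diverging in the first case and matching in the second.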
Here is an old answer about the upgrade order of nodes in a distributed environment. https://community.splunk.com/t5/All-Apps-and-Add-ons/Upgrading-Apps-and-Add-ons-in-distributed-environment/m-p/554548/highlight/true#M65820 Quite probably you can do this in a different order, but then you will get some warnings while running components before they are all on the correct versions.
@ramiiitnzv  To obtain a license for the Splunk Enterprise Security (ES) app, you need to purchase it from Splunk. https://help.splunk.com/en/splunk-enterprise-security-8/user-guide/8.0/introduction/licensing-for-splunk-enterprise-security 
Are these fields mutually exclusive? I'm not sure about the relationship between these four fields.
You must have a license to run ES before you can download it.  A Developer license does not grant access to ES.
I have a Developer License but I'm unable to download ES. Can anyone help me with this?
In addition to @kiran_panchavat's answer: all the components support backward communication with up to n-3 Splunk versions, so upgrade in decreasing order of significance among architecture components. The first tier is management nodes such as the cluster manager and search head cluster deployer. Next come components like search heads and indexers, and then the forwarders.
I have a unique problem regarding SNMP and Splunk ITSI. First, my VNF node was forwarding SNMP traps to an SNMP target via SNMPv3. That target supports SNMP auto discovery, so I didn't have to manually configure the engine ID. Later I got the option of integrating my node with Splunk ITSI and SC4SNMP, which I did, but initially they didn't support engine ID auto discovery, so I had to manually run snmpget and provide the engine ID to them.

Now I have started sending my traps towards both targets with the same OID and engine ID. But my alarms are not reaching the Splunk index, even though we can capture them on the SC4SNMP port. Later I found out that Splunk ITSI is getting the same alarm, with the same OID, forwarded from the previous target. But this time that target is using SNMPv2 and is sending it as a community string with a few OIDs bundled together. Can this be the reason my node's original trap is not reaching the correct index?
To concur with the above answers, you would have to utilize a lookup file that lists all of the sources you want to monitor. Natively, Splunk does not have "source = 0 events" (it doesn't know what it doesn't know). In the environment we work in, we apply a similar approach, but it's based on host and whether the sources are coming in or not for our customers.

| tstats values(source) as source, values(sourcetype) as sourcetype WHERE index=[index] [ | inputlookup [myHostLookup].csv | fields host ] by host
| stats count, values(sourcetype) as sourcetype, values(source) as source by host
| eval Reporting=if(isnull(source), "No Matching Sources", "Yes")
| table host, Reporting, source, sourcetype

--- If this reply helps you, Karma would be appreciated.
Hi @meg  Please can you confirm the sourcetype that you are using? Also, is this being read directly using a UF and sent to Splunk without going via other systems?  Are you ingesting this using the Splunk Add-on for Sysmon for Linux on the UF?  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @mbissante Just a follow up on my previous post, the following are for 9.0.9, which was the last 9.0.x release:

-------- Linux --------

-- Tarball (TGZ)
wget -O splunk-9.0.9-6315942c563f-Linux-x86_64.tgz 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f-Linux-x86_64.tgz'
wget -O splunkforwarder-9.0.9-6315942c563f-Linux-x86_64.tgz 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f-Linux-x86_64.tgz'

-- Debian (DEB)
wget -O splunk-9.0.9-6315942c563f-linux-2.6-amd64.deb 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f-linux-2.6-amd64.deb'
wget -O splunkforwarder-9.0.9-6315942c563f-linux-2.6-amd64.deb 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f-linux-2.6-amd64.deb'

-- RHEL (RPM)
wget -O splunk-9.0.9-6315942c563f.x86_64.rpm 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f.x86_64.rpm'
wget -O splunkforwarder-9.0.9-6315942c563f.x86_64.rpm 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f.x86_64.rpm'

Kudos to ryanadler for this great tool https://github.com/ryanadler/downloadSplunk

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @mbissante Below are the download links for 9.0.1 if this helps:

Splunk Linux tar file - https://download.splunk.com/products/splunk/releases/9.0.1/linux/splunk-9.0.1-82c987350fde-Linux-x86_64.tgz
Splunk Linux rpm file - https://download.splunk.com/products/splunk/releases/9.0.1/linux/splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
Splunk Linux Debian file - https://download.splunk.com/products/splunk/releases/9.0.1/linux/splunk-9.0.1-82c987350fde-linux-2.6-amd64.deb
Splunk Windows MSI file - https://download.splunk.com/products/splunk/releases/9.0.1/windows/splunk-9.0.1-82c987350fde-x64-release.msi

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi, I need to upgrade Splunk v8.2.2.1 on RHEL 7.6 to Splunk v9.4 on RHEL 9.6. I saw that Splunk 8.2 does not support RHEL 9.6, and the customer cannot upgrade to RHEL 8.x. The only Splunk version compatible with both versions of RHEL is Splunk 9.0, but it is impossible to download it directly from the Splunk site. How can I download this older version? Thank you, Mauro