All Posts

Please provide some anonymised sample events which demonstrate the issue you are facing. Ideally, place these in a code block (using the </> formatting option).
Thank you very much @PrewinThomas. With what you commented, along with @bowesmana's input, I was able to specify what I needed.
Applying this suggestion worked for me... I've tested it with more data, and so far there have been no inconsistencies. I really appreciate the input!
Hello, I have a table in Dashboard Studio and I want to show part of a JSON field which contains sub-objects. I am running this query:

index="stg_observability_s" AdditionalData.testName=* sourcetype=SplunkQuality AdditionalData.domain="*" AdditionalData.pipelineName="*" AdditionalData.buildId="15757128291" AdditionalData.team="*" testCategories="*" AdditionalData.status="*" AdditionalData.isFinalResult="*" AdditionalData.fullName="***"
| search AdditionalData.testLog.logs{}=*
| spath path="AdditionalData.testLog.logs{}" output=logs
| table logs

The JSON looks flattened; I don't see the sub-objects inside. Is there a way to fix it? Thanks!
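For illustration, here is a minimal sketch of one way to reach the sub-objects, using a mock event in place of the real SplunkQuality data (the step/status field names are invented for the example): when spath is pointed at an array of objects, each element comes back as a JSON string in a multivalue field, so expanding and re-parsing usually exposes the nested fields.

| makeresults
| eval _raw="{\"AdditionalData\": {\"testLog\": {\"logs\": [{\"step\": \"login\", \"status\": \"pass\"}, {\"step\": \"checkout\", \"status\": \"fail\"}]}}}"
``` mock event standing in for the real data ```
| spath path="AdditionalData.testLog.logs{}" output=logs
| mvexpand logs
``` each array element is a JSON string; re-parse it to get the sub-object fields ```
| spath input=logs
| table logs step status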
@tanjil  I recommend raising a Splunk Support ticket to request the 0 MB license file. Please ensure that the support case is submitted under your valid entitlement. Recently, one of our customers submitted a similar request, and Splunk provided the 0 MB license file for their heavy forwarder.
The first thing to do would be to reach out to your friendly local Splunk Partner or any other sales channel you might have used before. If you are a current Cloud customer, you should be entitled to a 0-byte license. It's typically used for a forwarder, but it might also be used for accessing previously indexed data.
Hi everyone, We already have a Splunk Cloud environment, and on-premises we have a Splunk deployment server. However, the on-prem deployment server currently has no license — it's only used to manage forwarders and isn’t indexing any data. We now have some legacy logs stored locally that we’d like to search through without ingesting new data. For this, we’re looking to get a Splunk 0 MB license (search-only) on the deployment server. Is there any way to request or generate a 0 MB license for this use case? Thanks in advance for your help!
Mine did the same thing.  It would seem it is your version of Splunk.  Set up a test environment on a laptop or spare VM, run a newer version of Splunk, and see if the problem resolves itself.
The OP is quite old.  It is possible that there was a bug in 9.2.3 that caused selectFirstSearchResult to not take effect.  I can confirm @tej57 's observation that the sample code behaves exactly as you asked in 9.4, too.      
Hi @chrisboy68, There are lots of options presented, but combining @yuanliu's response with a conversion from bill_date to year and month gives the output closest to "ID Cost by month":

| makeresults format=csv data="bill_date,ID,Cost,_time
6/1/25,1,1.24,2025-06-16T12:42:41.282-04:00
6/1/25,1,1.4,2025-06-16T12:00:41.282-04:00
5/1/25,1,2.5,2025-06-15T12:42:41.282-04:00
5/1/25,1,2.2,2025-06-14T12:00:41.282-04:00
5/1/25,2,3.2,2025-06-14T12:42:41.282-04:00
5/1/25,2,3.3,2025-06-14T12:00:41.282-04:00
3/1/25,1,4.4,2025-06-13T12:42:41.282-04:00
3/1/25,1,5,2025-06-13T12:00:41.282-04:00
3/1/25,2,6,2025-06-13T12:42:41.282-04:00
3/1/25,2,6.3,2025-06-13T12:00:41.282-04:00"
| eval _time=strptime(_time, "%FT%T.%N%z")
``` end test data ```
``` assuming month/day/year for bill_date ```
| eval Month=strftime(strptime(bill_date, "%m/%e/%y"), "%Y-%m")
| stats latest(Cost) as Cost by Month ID

Month   ID Cost
------- -- ----
2025-03 1  4.4
2025-03 2  6
2025-05 1  2.5
2025-05 2  3.2
2025-06 1  1.24

You can alternatively use chart, xyseries, etc. to pivot the results:

| chart latest(Cost) over ID by Month

ID 2025-03 2025-05 2025-06
-- ------- ------- -------
1  4.4     2.5     1.24
2  6       3.2
Hi @Namo, Make sure $SPLUNK_HOME/etc/auth/cacert.pem contains all certificates in the trust chain. If you're using a self-signed certificate, add that certificate to cacert.pem. If you've changed the name or location of the file, make sure your configuration references the new file. If you're also attempting a KV store upgrade, check the prerequisites at https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/administer-the-app-key-value-store/upgrade-the-kv-store-server-version#ariaid-title2 as others have recommended. Also note that your private key must be encrypted with the correct sslPassword value in server.conf for a KV store upgrade to succeed. When using a blank/empty password, you'll see a message similar to the following in splunkd.log:

06-21-2025 00:00:00.000 -0000 WARN KVStoreUpgradeToolTLS [133719 KVStoreConfigurationThread] - Incomplete TLS settings detected, skipping creation of KVStore TLS credentials file!
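If it helps, a quick way to look for that warning without tailing the log file, sketched under the assumption that the instance indexes its own _internal data (log_level and component are the standard splunkd.log extractions):

index=_internal sourcetype=splunkd log_level=WARN KVStoreUpgradeToolTLS
| table _time host component _raw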
That is perfect. Exactly what I needed. This was the most helpful reply to any question I think I have ever posted to a forum.
I changed the time and the pack size, but the problem still exists.
Hi @kn450  Splunk Stream requires NetFlow v9/IPFIX templates to be received before it can decode flow records; if templates arrive infrequently or are missed, flows are dropped. I'm not aware of any specific known issues around this, but I certainly think it is worth configuring Flowmon to send templates much more frequently (ideally every 20–30 seconds, not just every 600 seconds or 4096 packets) and seeing if this alleviates the issue (a quick monitoring sketch follows below).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
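If you change the template interval, here is a rough way to trend whether the drops actually decrease, sketched under the assumption that the NetflowReceiver warnings land in _internal (adjust the index and search terms to wherever your Stream logs are collected):

index=_internal "NetFlowDecoder::decodeFlow" "No template with id"
| rex "No template with id (?<template_id>\d+) received for observation domain id (?<domain_id>\d+) from device (?<exporter>.+?)\. Dropping"
| timechart span=10m count by exporter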
Hi Splunk Community, I'm currently integrating Flowmon NDR as a NetFlow data exporter to Splunk Stream, but I'm encountering a persistent issue where Splunk receives the flow data, yet it's not decoded properly, and flow sets are being dropped due to missing templates. Here's the warning from the Splunk log:

```
2025-06-21 08:34:49 WARN [139703701448448] (NetflowManager/NetflowDecoder.cpp:1282) stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 258 received for observation domain id 13000 from device 10.x.x.x. Dropping flow data set of size 328
```

Setup details:
- Exporter: Flowmon
- Collector: Splunk Stream
- Protocol: NetFlow v9 (also tested with IPFIX)
- Transport: UDP
- Template resend configuration: every 4096 packets or 600 seconds

Despite verifying these settings on Flowmon, Splunk continues to report that the template ID (in this case, 258) was never received, causing all related flows to be dropped.

My questions:
1. Has anyone successfully integrated Flowmon with Splunk Stream using NetFlow v9?
2. Is there a known issue with Splunk Stream not handling templates properly from certain exporters?
3. Are there any recommended Splunk Stream configuration tweaks for handling late or infrequent templates?

Any insights, experiences, or troubleshooting tips would be greatly appreciated. Thanks in advance!
It sounds like your Heavy Forwarder (HF) is struggling to handle the high volume of Akamai logs (30k events in 5 minutes), which may be causing the GUI to become slow or unresponsive. The error in splunkd.log about the TA-Akamai-SIEM modular input timing out (exceeding 30,000 ms) suggests the modular input script is overloaded. Since data ingestion continues and splunkd is running, the issue is likely related to resource contention or configuration. Here's how you can troubleshoot and resolve it:

1. Check HF Resource Usage:
- Monitor CPU, memory, and disk I/O on the HF using top or htop (Linux) or Task Manager (Windows). High resource usage could indicate the HF is overwhelmed by the Akamai log volume.
- Use the Splunk Monitoring Console (| mcatalog) or | rest /services/server/info to check system metrics like CPU usage or memory consumption on the HF.

2. Tune Modular Input Timeout:
- The TA-Akamai-SIEM modular input is timing out after 30 seconds (30,000 ms). Increase the timeout in $SPLUNK_HOME/etc/apps/TA-Akamai-SIEM/local/inputs.conf:

[TA-Akamai-SIEM://<input_name>]
interval = <your_interval>
execution_timeout = 60000 # Increase to 60 seconds

- Restart the HF after making this change ($SPLUNK_HOME/bin/splunk restart).

3. Optimize TA-Akamai-SIEM Configuration:
- Check the interval setting for the Akamai input in inputs.conf. A very short interval (e.g., 60 seconds) with high data volume (30k events/5 min) could overload the modular input. Consider increasing the interval (e.g., to 300 seconds) to reduce the frequency of API calls.
- Verify the API query filters in the TA configuration. Narrow the scope (e.g., specific Akamai configurations or event types) to reduce the data volume if possible.

4. Address GUI Unresponsiveness:
- The GUI slowdown may be due to splunkd prioritizing data ingestion over web requests. Check $SPLUNK_HOME/etc/system/local/web.conf for max_threads or http_port settings. Increase max_threads if it's too low:

[settings]
max_threads = 20 # Default is 10; adjust cautiously

- Confirm the HF's web port (default 8000) is accessible via telnet <HF_IP> 8000 from your machine.

5. Inspect splunkd.log Further (see the search sketch after this list):
- Look for additional errors in $SPLUNK_HOME/var/log/splunk/splunkd.log related to TA-Akamai-SIEM or resource exhaustion (e.g., memory or thread limits).
- Check for errors in $SPLUNK_HOME/var/log/splunk/web_service.log for GUI-specific issues.

6. Scale or Offload Processing:
- If the HF is underpowered, consider upgrading its hardware (more CPU cores or RAM) to handle the 30k events/5 min load.
- Alternatively, distribute the load by deploying multiple HFs and splitting the Akamai inputs across them, forwarding to the same indexers.
- Ensure the TA-Akamai-SIEM add-on is only installed on the HF (not the Search Head or indexers) to avoid unnecessary processing.

7. Engage Splunk Support:
- Since Support reviewed the diag file, ask them to specifically analyze the TA-Akamai-SIEM modular input logs and any resource-related errors in splunkd.log. Share the timeout error and data volume details.
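As a supplement to item 5, here is a sketch for spotting how often the modular input is erroring or timing out, assuming the HF forwards its own _internal logs (adjust the host placeholder and search terms to match what you actually see in splunkd.log):

index=_internal sourcetype=splunkd host=<your_hf> "TA-Akamai-SIEM" (log_level=ERROR OR log_level=WARN)
| timechart span=15m count by log_level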
This is exactly the way to solve this problem.  Honestly, as you start to really master Splunk, you will find that stats seems to be the answer for everything. This is a very helpful presentation on your very problem: let-stats-sort-them-out-building-complex-result-sets-that-use-multiple-source-types.pdf, slide 33.
The most important lesson you can learn here is: Don't join.  Meanwhile, your description is inconsistent about which field is really hostname, and even which index is "main".  The following will mimic what you get, but with better performance.

(index=infrastructure_reports source=nutanix_vm_host_report) OR (index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini)
| eval block = coalesce(block, 'spec.name')
| fields block spec.cluster spec.resources.memory_size_mib operating_system
| stats values(*) as * by block

I vaguely get the sense that the "main" index (I assume that's the infrastructure one, which produces the spec.name field) lacks domain.com in some events, causing missed matches.  You cannot solve this problem with a wildcard.

One key piece of information you also did not clarify is what the possible values of "domain.com" are, given that this is simply a stand-in string.  If there is more than one value for "domain.com", and the "name" part could match multiple "domain.com" values and represent different hostnames, your problem is unsolvable. The only way the problem is solvable is if "domain.com" doesn't matter, i.e., if the "name" part is unique for any hostname.  If this is the case, you can strip out the "domain.com" part in spec.name.

(index=infrastructure_reports source=nutanix_vm_host_report) OR (index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini)
| rex field=spec.name "^(?<block>[^\.]+)"
| fields block spec.cluster spec.resources.memory_size_mib operating_system
| stats values(*) as * by block
Hi @Karthikeya  Can you check whether port 8000 is actually listening on the host? How did you leave it with Support? If they've got the diag, they should have a lot more info on what the issue could be here. What are the specs of the HF box? It does sound like it could be under pressure, which could be causing issues with the UI, but it's too hard to say; please share as much info as possible.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
I am looking for a way to join results from two indexes based on the hostname. The main index has the hostname as just name, and the second index has it as name.domain.com. The fields are spec.name and block. I tried to wildcard it, but the results were erratic.

index=infrastructure_reports source=nutanix_vm_host_report
| fields spec.name spec.cluster_reference.name spec.resources.memory_size_mib
| rename spec.name as block
| join block*
    [ search index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini
    | fields block app_stack operating_system]
| table block spec.cluster spec.resources.memory_size_mib operating_system