All Posts

1. Ok. You're searching by full JSON paths, which probably means you're using indexed extractions. This is generally Not Good (tm).
2. You're using the table command at the end. It produces a plain results table which does not do any additional formatting. You might try | fields logs | fields - _raw _time | rename logs as _raw instead of the table command, and use an events viewer widget instead of a table, but I'm not sure it will look good.
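Roughly like this, reusing the base search from your post (an untested sketch; it assumes the spath output field is called logs, as in your query):

```
index="stg_observability_s" sourcetype=SplunkQuality AdditionalData.testName=* AdditionalData.buildId="15757128291"
| spath path="AdditionalData.testLog.logs{}" output=logs
| fields logs
| fields - _raw _time
| rename logs as _raw
```

The idea being that an events viewer will render whatever ends up in _raw, whereas the table command just dumps the field as-is.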
"AdditionalData":{"time":"2025-06-19T11:52:37","testName":"CheckLiveRatesTest","testClass":"Automation.TestsFolder","fullName":"Automation.TestsFolder","repoUrl":"***","pipelineName":"***","buildId":... See more...
"AdditionalData":{"time":"2025-06-19T11:52:37","testName":"CheckLiveRatesTest","testClass":"Automation.TestsFolder","fullName":"Automation.TestsFolder","repoUrl":"***","pipelineName":"***","buildId":"291","platform":"Backend","buildUrl":"https://github.com/","domain":"***","team":"***","env":"PreProd","status":"Failed","testDuration":"00:00:51.763","retry":1,"maxRetries":1,"isFinalResult":true,"errorMessage":" Verify live rates color\nAssert.That(market.VerifyLiveRatesColor(), is equal to 'true')\n Expected: True\n But was: False\n","stackTrace":" ***","triggeredManually":true,"hidden":false,"testLog":{"artifacts":{"Snapshot below: ":"http://www.dummyurl.com"},"logs":["[06/19/2025 11:51:45] Initializing BaseTestUI",["EndTime: 06/19/2025 11:51:47","Duration: 00:00:01.7646422","[06/19/2025 11:51:45] Driver configurations:\r\nIs local run: False\r\n
Please provide the raw event (not the formatted version e.g. {"AdditionalData": { "buildId":291,
AdditionalData: { [-]
  buildId: 291
  buildUrl: https://github.com
  domain: ***
  env: PreProd
  errorMessage: Verify live rates color Assert.That(market.VerifyLiveRatesColor(), is equal to 'true') Expected: True But was: False
  fullName: Automation.TestsFolder
  hidden: false
  isFinalResult: true
  maxRetries: 1
  pipelineName: ***
  platform: Backend
  repoUrl: ***
  retry: 1
  stackTrace: at ***
  status: Failed
  team: ***
  testCategories: [ [+] ]
  testClass: Automation.TestsFolder
  testDuration: 00:00:51.763
  testLog: { [-]
    artifacts: { [+] }
    logs: [ [-]
      [06/19/2025 11:51:45] Initializing BaseTestUI [ [+] ]
      [06/19/2025 11:51:47] Initializing EtoroWorkFlows [ [+] ]

So if I'm using the query in my post, I don't see the [+] inside logs; I see it flat as one event.
Please provide some anonymised sample events which demonstrate the issue you are facing. Ideally, place these in a code block (using the </> formatting option).
Thank you very much @PrewinThomas; with what you commented, along with @bowesmana's input, I was able to specify exactly what I needed.
Applying this suggestion worked for me... I've tested it with more data, and so far there have been no inconsistencies. I really appreciate the input!
Hello, I have a table in Dashboard Studio and I want to show part of a JSON field which contains sub-objects. When running this query:

index="stg_observability_s" AdditionalData.testName=* sourcetype=SplunkQuality AdditionalData.domain="*" AdditionalData.pipelineName="*" AdditionalData.buildId="15757128291" AdditionalData.team="*" testCategories="*" AdditionalData.status="*" AdditionalData.isFinalResult="*" AdditionalData.fullName="***"
| search AdditionalData.testLog.logs{}=*
| spath path="AdditionalData.testLog.logs{}" output=logs
| table logs

the JSON looks flattened; I don't see the sub-objects inside. Is there a way to fix it? Thanks
@tanjil I recommend raising a Splunk Support ticket to request the 0 MB license file. Please ensure that the support case is submitted under your valid entitlement. Recently, one of our customers submitted a similar request, and Splunk provided the 0 MB license file for their heavy forwarder.
The first thing to do would be to reach out to your local friendly Splunk Partner or any other sales channel you might have used before. If you are a current Cloud customer, you should be entitled to a 0-byte license. It's typically used for a forwarder, but it can also be used for accessing previously indexed data.
Hi everyone, We already have a Splunk Cloud environment, and on-premises we have a Splunk deployment server. However, the on-prem deployment server currently has no license — it's only used to manage forwarders and isn’t indexing any data. We now have some legacy logs stored locally that we’d like to search through without ingesting new data. For this, we’re looking to get a Splunk 0 MB license (search-only) on the deployment server. Is there any way to request or generate a 0 MB license for this use case? Thanks in advance for your help!
Mine did the same thing. It would seem it is down to your version of Splunk. Set up a test environment on a laptop or spare VM, run a newer version of Splunk, and see if the problem resolves itself.
The OP is quite old. It is possible that there was a bug in 9.2.3 that caused selectFirstSearchResult to not take effect. I can confirm @tej57's observation that the sample code behaves exactly as you asked in 9.4, too.
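For anyone landing here later, this is roughly where that option sits in a Dashboard Studio dropdown definition (a minimal sketch from memory; the input name, token, and data source ID are made up, so check it against your own dashboard's source):

```
"inputs": {
    "input_example": {
        "type": "input.dropdown",
        "title": "Select a value",
        "options": {
            "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
            "token": "example_token",
            "selectFirstSearchResult": true
        },
        "dataSources": {
            "primary": "ds_example_search"
        }
    }
}
```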
Hi @chrisboy68, There are lots of options presented, but combining @yuanliu's response with a conversion from bill_date to year and month gives the output closest to "ID Cost by month":

| makeresults format=csv data="bill_date,ID,Cost,_time
6/1/25,1,1.24,2025-06-16T12:42:41.282-04:00
6/1/25,1,1.4,2025-06-16T12:00:41.282-04:00
5/1/25,1,2.5,2025-06-15T12:42:41.282-04:00
5/1/25,1,2.2,2025-06-14T12:00:41.282-04:00
5/1/25,2,3.2,2025-06-14T12:42:41.282-04:00
5/1/25,2,3.3,2025-06-14T12:00:41.282-04:00
3/1/25,1,4.4,2025-06-13T12:42:41.282-04:00
3/1/25,1,5,2025-06-13T12:00:41.282-04:00
3/1/25,2,6,2025-06-13T12:42:41.282-04:00
3/1/25,2,6.3,2025-06-13T12:00:41.282-04:00"
| eval _time=strptime(_time, "%FT%T.%N%z")
``` end test data ```
``` assuming month/day/year for bill_date ```
| eval Month=strftime(strptime(bill_date, "%m/%e/%y"), "%Y-%m")
| stats latest(Cost) as Cost by Month ID

Month    ID Cost
-------  -- ----
2025-03  1  4.4
2025-03  2  6
2025-05  1  2.5
2025-05  2  3.2
2025-06  1  1.24

You can alternatively use chart, xyseries, etc. to pivot the results:

| chart latest(Cost) over ID by Month

ID 2025-03 2025-05 2025-06
-- ------- ------- -------
1  4.4     2.5     1.24
2  6       3.2
Hi @Namo,
Make sure $SPLUNK_HOME/etc/auth/cacert.pem contains all certificates in the trust chain. If you're using a self-signed certificate, add that certificate to cacert.pem. If you've changed the name or location of the file, make sure your configuration points to the new file.
If you're also attempting a KV store upgrade, check the prerequisites at https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/administer-the-app-key-value-store/upgrade-the-kv-store-server-version#ariaid-title2 as others have recommended. Also note that your private key must be encrypted with the correct sslPassword value in server.conf for a KV store upgrade to succeed. When using a blank/empty password, you'll see a message similar to the following in splunkd.log:
06-21-2025 00:00:00.000 -0000 WARN KVStoreUpgradeToolTLS [133719 KVStoreConfigurationThread] - Incomplete TLS settings detected, skipping creation of KVStore TLS credentials file!
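If it helps, here is a quick way to sanity-check both points (a sketch using the default paths; adjust them and the placeholder passphrase to your environment):

```
# Verify the server cert against the CA bundle Splunk uses
$SPLUNK_HOME/bin/splunk cmd openssl verify -CAfile $SPLUNK_HOME/etc/auth/cacert.pem $SPLUNK_HOME/etc/auth/server.pem

# server.conf - sslPassword must match the private key's passphrase
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <your_key_passphrase>
```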
That is perfect. Exactly what I needed. This was the most helpful reply to any question I think I have ever posted to a forum.
I changed the time interval and the packet threshold, but the problem still exists.
Hi @kn450
Splunk Stream requires NetFlow v9/IPFIX templates to be received before it can decode flow records; if templates arrive infrequently or are missed, flows are dropped. I'm not aware of any specific known issues around this, but I certainly think it is worth configuring Flowmon to send templates much more frequently (ideally every 20–30 seconds, not just every 600 seconds or 4096 packets) and seeing if this alleviates the issue (a quick way to verify is sketched below).
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
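To confirm whether more frequent templates help, a search along these lines should show the drop rate over time (this assumes your streamfwd logs are indexed into _internal; adjust the source filter if yours land elsewhere):

```
index=_internal source=*streamfwd.log* "Unable to decode flow set data"
| timechart span=5m count AS dropped_flow_sets
```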
Hi Splunk Community, I'm currently integrating Flowmon NDR as a NetFlow data exporter to Splunk Stream, but I'm encountering a persistent issue where Splunk receives the flow data, yet it's not decoded properly, and flow sets are being dropped due to missing templates. Here's the warning from the Splunk log:

```
2025-06-21 08:34:49 WARN [139703701448448] (NetflowManager/NetflowDecoder.cpp:1282) stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 258 received for observation domain id 13000 from device 10.x.x.x. Dropping flow data set of size 328
```

Setup details:
- Exporter: Flowmon
- Collector: Splunk Stream
- Protocol: NetFlow v9 (also tested with IPFIX)
- Transport: UDP
- Template resend configuration: every 4096 packets or 600 seconds

Despite verifying these settings on Flowmon, Splunk continues to report that the template ID (in this case, 258) was never received, causing all related flows to be dropped.

My questions:
1. Has anyone successfully integrated Flowmon with Splunk Stream using NetFlow v9?
2. Is there a known issue with Splunk Stream not handling templates properly from certain exporters?
3. Are there any recommended Splunk Stream configuration tweaks for handling late or infrequent templates?

Any insights, experiences, or troubleshooting tips would be greatly appreciated. Thanks in advance!
It sounds like your Heavy Forwarder (HF) is struggling to handle the high volume of Akamai logs (30k events in 5 minutes), which may be causing the GUI to become slow or unresponsive. The error in splunkd.log about the TA-Akamai-SIEM modular input timing out (exceeding 30,000 ms) suggests the modular input script is overloaded. Since data ingestion continues and splunkd is running, the issue is likely related to resource contention or configuration. Here's how you can troubleshoot and resolve it:

1. Check HF resource usage:
- Monitor CPU, memory, and disk I/O on the HF using top or htop (Linux) or Task Manager (Windows). High resource usage could indicate the HF is overwhelmed by the Akamai log volume.
- Use the Splunk Monitoring Console (| mcatalog) or | rest /services/server/info to check system metrics like CPU usage or memory consumption on the HF.

2. Tune the modular input timeout:
- The TA-Akamai-SIEM modular input is timing out after 30 seconds (30,000 ms). Increase the timeout in $SPLUNK_HOME/etc/apps/TA-Akamai-SIEM/local/inputs.conf:

```
[TA-Akamai-SIEM://<input_name>]
interval = <your_interval>
execution_timeout = 60000 # Increase to 60 seconds
```

- Restart the HF after making this change ($SPLUNK_HOME/bin/splunk restart).

3. Optimize the TA-Akamai-SIEM configuration:
- Check the interval setting for the Akamai input in inputs.conf. A very short interval (e.g., 60 seconds) with high data volume (30k events/5 min) could overload the modular input. Consider increasing the interval (e.g., to 300 seconds) to reduce the frequency of API calls.
- Verify the API query filters in the TA configuration. Narrow the scope (e.g., specific Akamai configurations or event types) to reduce the data volume if possible.

4. Address GUI unresponsiveness:
- The GUI slowdown may be due to splunkd prioritizing data ingestion over web requests. Check $SPLUNK_HOME/etc/system/local/web.conf for max_threads or http_port settings. Increase max_threads if it's too low:

```
[settings]
max_threads = 20 # Default is 10; adjust cautiously
```

- Confirm the HF's web port (default 8000) is accessible via telnet <HF_IP> 8000 from your machine.

5. Inspect splunkd.log further:
- Look for additional errors in $SPLUNK_HOME/var/log/splunk/splunkd.log related to TA-Akamai-SIEM or resource exhaustion (e.g., memory or thread limits).
- Check for errors in $SPLUNK_HOME/var/log/splunk/web_service.log for GUI-specific issues.

6. Scale or offload processing:
- If the HF is underpowered, consider upgrading its hardware (more CPU cores or RAM) to handle the 30k events/5 min load.
- Alternatively, distribute the load by deploying multiple HFs and splitting the Akamai inputs across them, forwarding to the same indexers.
- Ensure the TA-Akamai-SIEM add-on is only installed on the HF (not the Search Head or indexers) to avoid unnecessary processing.

7. Engage Splunk Support:
- Since Support reviewed the diag file, ask them to specifically analyze the TA-Akamai-SIEM modular input logs and any resource-related errors in splunkd.log. Share the timeout error and data volume details.
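For the resource-usage check, here is one hedged example of pulling the HF's own introspection data from a search head (it assumes the HF forwards its _introspection index as usual; replace <hf_hostname> with the forwarder's host name):

```
index=_introspection host=<hf_hostname> sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_system_pct) AS cpu_system_pct avg(data.cpu_user_pct) AS cpu_user_pct avg(data.mem_used) AS mem_used_mb
```

Sustained high CPU or memory during the Akamai polling windows would point at the resource-contention explanation above.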