All Posts


Mine did the same thing. It would seem the issue is with your version of Splunk. Set up a test environment on a laptop or spare VM, run a newer version of Splunk, and see whether the problem resolves itself.
The OP is quite old. It is possible that there was a bug in 9.2.3 that caused selectFirstSearchResult to not take effect. I can confirm @tej57's observation that the sample code behaves exactly as you asked in 9.4, too.
Hi @chrisboy68,

There are lots of options presented, but combining @yuanliu's response with a conversion from bill_date to year and month gives the output closest to "ID Cost by month":

| makeresults format=csv data="bill_date,ID,Cost,_time
6/1/25,1,1.24,2025-06-16T12:42:41.282-04:00
6/1/25,1,1.4,2025-06-16T12:00:41.282-04:00
5/1/25,1,2.5,2025-06-15T12:42:41.282-04:00
5/1/25,1,2.2,2025-06-14T12:00:41.282-04:00
5/1/25,2,3.2,2025-06-14T12:42:41.282-04:00
5/1/25,2,3.3,2025-06-14T12:00:41.282-04:00
3/1/25,1,4.4,2025-06-13T12:42:41.282-04:00
3/1/25,1,5,2025-06-13T12:00:41.282-04:00
3/1/25,2,6,2025-06-13T12:42:41.282-04:00
3/1/25,2,6.3,2025-06-13T12:00:41.282-04:00"
| eval _time=strptime(_time, "%FT%T.%N%z")
``` end test data ```
``` assuming month/day/year for bill_date ```
| eval Month=strftime(strptime(bill_date, "%m/%e/%y"), "%Y-%m")
| stats latest(Cost) as Cost by Month ID

Month   ID Cost
------- -- ----
2025-03 1  4.4
2025-03 2  6
2025-05 1  2.5
2025-05 2  3.2
2025-06 1  1.24

You can alternatively use chart, xyseries, etc. to pivot the results:

| chart latest(Cost) over ID by Month

ID 2025-03 2025-05 2025-06
-- ------- ------- -------
1  4.4     2.5     1.24
2  6       3.2
Hi @Namo,

Make sure $SPLUNK_HOME/etc/auth/cacert.pem contains all certificates in the trust chain. If you're using a self-signed certificate, add this certificate to cacert.pem. If you've changed the name or location of the file, update the new file.

If you're also attempting a KV store upgrade, check the prerequisites at https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/administer-the-app-key-value-store/upgrade-the-kv-store-server-version#ariaid-title2 as others have recommended. Also note that your private key must be encrypted with the correct sslPassword value in server.conf for a KV store upgrade to succeed. When using a blank/empty password, you'll see a message similar to the following in splunkd.log:

06-21-2025 00:00:00.000 -0000 WARN KVStoreUpgradeToolTLS [133719 KVStoreConfigurationThread] - Incomplete TLS settings detected, skipping creation of KVStore TLS credentials file!
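If you want to confirm the KV store's state before and after the upgrade, a quick REST search like this can help. This is only a sketch - run it on the instance in question; since the exact field names returned can vary by version, transpose is used to show everything:

| rest /services/kvstore/status splunk_server=local
``` show all returned status fields (server version, current status, etc.) as rows ```
| transpose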
That is perfect. Exactly what I needed. This was the most helpful reply to any question I think I have ever posted to a forum.
I changed the time and the pack size, but the problem still exists.
Hi @kn450

Splunk Stream requires NetFlow v9/IPFIX templates to be received before it can decode flow records; if templates arrive infrequently or are missed, flows are dropped. I'm not aware of any specific known issues around this, but I certainly think it is worth configuring Flowmon to send templates much more frequently (ideally every 20-30 seconds, not just every 600 seconds or 4096 packets) and seeing if this alleviates the issue.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
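PS - if you want to quantify how often flows are being dropped while you tune the template resend interval, something like this may help. It is a rough sketch that assumes the HF's streamfwd.log is being indexed into _internal (files under $SPLUNK_HOME/var/log/splunk usually are) and that the warning format matches the one you posted:

index=_internal source=*streamfwd.log* "Unable to decode flow set data"
``` pull the missing template id and the exporter out of the warning text ```
| rex "No template with id (?<template_id>\d+) received for observation domain id (?<domain_id>\d+) from device (?<exporter>\S+)\. Dropping"
| timechart span=10m count by exporter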
Hi Splunk Community, I'm currently integrating Flowmon ndr as a NetFlow data exporter to Splunk Stream, but I’m encountering a persistent issue where Splunk receives the flow data, yet it’s not decoded properly, and flow sets are being dropped due to missing templates. Here’s the warning from the Splunk log:

```
2025-06-21 08:34:49 WARN [139703701448448] (NetflowManager/NetflowDecoder.cpp:1282) stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 258 received for observation domain id 13000 from device 10.x.x.x. Dropping flow data set of size 328
```

Setup details:
- Exporter: Flowmon
- Collector: Splunk Stream
- Protocol: NetFlow v9 (also tested with IPFIX)
- Transport: UDP
- Template Resend Configuration: Every 4096 packets or 600 seconds

Despite verifying these settings on Flowmon, Splunk continues to report that the template ID (in this case, 258) was never received, causing all related flows to be dropped.

My questions:
1. Has anyone successfully integrated Flowmon with Splunk Stream using NetFlow v9?
2. Is there a known issue with Splunk Stream not handling templates properly from certain exporters?
3. Are there any recommended Splunk Stream configuration tweaks for handling late or infrequent templates?

Any insights, experiences, or troubleshooting tips would be greatly appreciated. Thanks in advance!
It sounds like your Heavy Forwarder (HF) is struggling to handle the high volume of Akamai logs (30k events in 5 minutes), which may be causing the GUI to become slow or unresponsive. The error in splunkd.log about the TA-Akamai-SIEM modular input timing out (exceeding 30,000 ms) suggests the modular input script is overloaded. Since data ingestion continues and splunkd is running, the issue is likely related to resource contention or configuration. Here’s how you can troubleshoot and resolve it:

1. Check HF resource usage:
- Monitor CPU, memory, and disk I/O on the HF using top or htop (Linux) or Task Manager (Windows). High resource usage could indicate the HF is overwhelmed by the Akamai log volume.
- Use the Splunk Monitoring Console or | rest /services/server/info to check system metrics like CPU usage or memory consumption on the HF.

2. Tune the modular input timeout:
- The TA-Akamai-SIEM modular input is timing out after 30 seconds (30,000 ms). Increase the timeout in $SPLUNK_HOME/etc/apps/TA-Akamai-SIEM/local/inputs.conf:

[TA-Akamai-SIEM://<input_name>]
interval = <your_interval>
execution_timeout = 60000  # increase to 60 seconds

- Restart the HF after making this change ($SPLUNK_HOME/bin/splunk restart).

3. Optimize the TA-Akamai-SIEM configuration:
- Check the interval setting for the Akamai input in inputs.conf. A very short interval (e.g., 60 seconds) with high data volume (30k events/5 min) could overload the modular input. Consider increasing the interval (e.g., to 300 seconds) to reduce the frequency of API calls.
- Verify the API query filters in the TA configuration. Narrow the scope (e.g., specific Akamai configurations or event types) to reduce the data volume if possible.

4. Address GUI unresponsiveness:
- The GUI slowdown may be due to splunkd prioritizing data ingestion over web requests. Check $SPLUNK_HOME/etc/system/local/web.conf for max_threads or http_port settings. Increase max_threads if it’s too low:

[settings]
max_threads = 20  # default is 10; adjust cautiously

- Confirm the HF’s web port (default 8000) is accessible via telnet <HF_IP> 8000 from your machine.

5. Inspect splunkd.log further:
- Look for additional errors in $SPLUNK_HOME/var/log/splunk/splunkd.log related to TA-Akamai-SIEM or resource exhaustion (e.g., memory or thread limits).
- Check $SPLUNK_HOME/var/log/splunk/web_service.log for GUI-specific issues.

6. Scale or offload processing:
- If the HF is underpowered, consider upgrading its hardware (more CPU cores or RAM) to handle the 30k events/5 min load.
- Alternatively, distribute the load by deploying multiple HFs and splitting the Akamai inputs across them, forwarding to the same indexers.
- Ensure the TA-Akamai-SIEM add-on is only installed on the HF (not the Search Head or indexers) to avoid unnecessary processing.

7. Engage Splunk Support:
- Since Support reviewed the diag file, ask them to specifically analyze the TA-Akamai-SIEM modular input logs and any resource-related errors in splunkd.log. Share the timeout error and data volume details.
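One more quick check you can run from a search head that receives the HF's internal logs - a sketch only, with <your_hf> as a placeholder for the HF's host name - to see how often the modular input is erroring or timing out over time:

index=_internal host=<your_hf> sourcetype=splunkd TA-Akamai-SIEM (component=ModularInputs OR component=ExecProcessor) (log_level=ERROR OR log_level=WARN)
``` count problems reported by the modular/scripted input framework over time ```
| timechart span=15m count by component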
This is exactly the way to solve this problem. Honestly, as you start to really master Splunk, you will find that stats seems to be the answer for everything. This is a very helpful presentation on your very problem: let-stats-sort-them-out-building-complex-result-sets-that-use-multiple-source-types.pdf, slide 33.
The most important lesson you can learn here is: Don't join. Meanwhile, your description is inconsistent about which field is really the hostname, and even which index is "main". The following will mimic what you get, but with better performance.

(index=infrastructure_reports source=nutanix_vm_host_report) OR (index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini)
| eval block = coalesce(block, 'spec.name')
| fields block spec.cluster spec.resources.memory_size_mib operating_system
| stats values(*) as * by block

I vaguely get the sense that the "main" index - I assume that's infrastructure, which produces the spec.name field - lacks domain.com in some events, causing missed matches. You cannot solve this problem with a wildcard.

One key piece of information you also did not clarify is what the possible values of "domain.com" are, given that this is simply a stand-in string. If there is more than one value for "domain.com", and the "name" part could match multiple "domain.com" values and represent different hostnames, your problem is unsolvable.

The only way the problem is solvable is if "domain.com" doesn't matter, i.e., if the "name" part is unique for any hostname. If this is the case, you can strip out the "domain.com" part in spec.name.

(index=infrastructure_reports source=nutanix_vm_host_report) OR (index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini)
| rex field=spec.name "^(?<block>[^\.]+)"
| fields block spec.cluster spec.resources.memory_size_mib operating_system
| stats values(*) as * by block
Hi @Karthikeya

Can you check whether anything is listening on port 8000 on the host? How did you leave it with Support? If they've got the diag, they should have a lot more info on what the issue could be here. What are the specs of the HF box? It does sound like it could be under pressure, which could be causing issues with the UI, but it's too hard to say; please share as much info as possible.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
I am looking for a way to join results from two indexes based on the hostname. The main index has the hostname as just name, and the second index has it as name.domain.com. The fields are spec.name and block. I tried to wildcard it, but the results were erratic.

index=infrastructure_reports source=nutanix_vm_host_report
| fields spec.name spec.cluster_reference.name spec.resources.memory_size_mib
| rename spec.name as block
| join block* [ search index=syslog_main source=/var/log/messages sourcetype=linux_messages_syslog path=/tmp/jira_assets_extract.ini | fields block app_stack operating_system]
| table block spec.cluster spec.resources.memory_size_mib operating_system
We have recently implemented an HF in our environment as part of ingesting Akamai logs into Splunk. We installed the Akamai add-on on the HF and forward the logs to the indexers. The data volume in Akamai is high (30k events in the last 5 minutes). Today our HF GUI is very slow and not loading at all. We tried a restart, but it is still the same. Data ingestion is still going on (checked in the SH). Not sure what caused the HF not to load. Splunkd is still running in the backend. web.conf also seems fine. We checked with Splunk Support; they reviewed the diag file and it seems fine.

Below is one of the errors I noticed in splunkd.log:

ERROR ModularInputs [10639 TcpChannelThread] - Argument validation for scheme = TA-Akamai-SIEM; killing process, because executing it took too long (over 30000 msecs.)
Hi @aa-splunk We are facing a similar issue where it's not extracting the fields inside the tuple. Did you manage to get any further with your transformation, or are you still in props/transforms hell? Thanks
Karma to both answers above. Don't let ES run on the Cluster Master. It sounds like you have beefy servers, but ES can bring the beefiest of servers to its knees if you are not careful. The Splunk ES Content Pack has close to 6000 (I could be underestimating the number; it could be higher) correlation / finding searches. If someone goes in and turns on all of those searches and you have your own stuff running, Splunk will absolutely choke and die.

As a best practice, I track each and every search that is running on my ES instances and map the time windows in which those searches run. Any new searches activated are set to run in the time windows that have the fewest searches running. Just remember that the general rule of thumb is that each search that runs occupies one CPU core and one gig of RAM while running.

Additionally, if you have the resources, you can open the Pandora's box of multithreading and/or allowing more concurrent searches - but do this ONLY as a last resort, and you should validate by running top, checking the Monitoring Console, or whatever tool you use, to confirm that you have spare CPU and RAM to allow more concurrent searches.
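If it helps, this is roughly how I inventory what is scheduled and when - a sketch using the saved/searches REST endpoint, so run it on the ES search head and adjust the filtering to taste:

| rest /servicesNS/-/-/saved/searches splunk_server=local
``` keep only enabled, scheduled searches and show when they run ```
| search is_scheduled=1 disabled=0
| table title eai:acl.app cron_schedule dispatch.earliest_time dispatch.latest_time
| sort cron_schedule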
@livehybrid I will check, thanks.
@livehybrid recommended using a lookup to track your forwarders. I have to say that this is a really valuable tool: if you keep track of your forwarders using a lookup, you can easily see which systems have not reported, and you can also see any new forwarders sending logs to your system that you didn't know about. Below is a YouTube video tutorial on using the lookup to track systems that are no longer sending logs: https://youtu.be/lo4_MIfTJzI?si=WfHxtBzTHLxmhQpe

All of the posts are good ideas. The lookup is just one way to do it that is quick and easy, but there are many ways to do things in Splunk; this is just my favorite way.
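For anyone who wants a starting point, the pattern looks roughly like this. It is a sketch, not the exact search from the video - expected_forwarders.csv (with a single host column) is an assumed lookup you would maintain yourself, and the 24-hour threshold is arbitrary:

| metadata type=hosts index=*
| fields host lastTime
``` append the list of hosts you expect to see, then merge by host ```
| inputlookup append=true expected_forwarders.csv
| stats max(lastTime) as lastTime by host
| eval status=case(isnull(lastTime), "never reported", lastTime < relative_time(now(), "-24h"), "stopped reporting", true(), "reporting")
| convert ctime(lastTime)

Adding a marker column to the lookup (e.g., expected=1) also lets you spot new forwarders that aren't on the list yet.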
It looks like your mcollect command isn’t writing data to the metrics_new index, despite mpreview returning results. The "No results to summary index" error and the info.csv limit warning suggest a couple of potential issues. Here’s how you can troubleshoot and fix it:

1. Verify mcollect syntax:
- Ensure the fields required by mcollect are present. For metrics data, mcollect expects _value (the numeric metric value), metric_name, and any dimensions (e.g., env, server_name). Your mpreview query should already include these, but confirm the output includes _value and metric_name:process.java.gc.collections.
- Try adding | fields metric_name, _value, env before mcollect to explicitly pass only the required fields:

| mpreview index=metrics_old target_per_timeseries=5 filter="metric_name IN (process.java.gc.collections) env IN (server_name:port)"
| fields metric_name, _value, env
| mcollect index=metrics_new

2. Check the index configuration:
- Confirm metrics_new is a metrics index (not an event index) on both the search head and indexers. Run | eventcount summarize=false index=metrics_new to verify the index exists and is accessible.
- Ensure the index is not full or disabled. Check $SPLUNK_HOME/etc/apps/<app>/local/indexes.conf on the indexers for the metrics_new settings and verify frozenTimePeriodInSecs or maxTotalDataSizeMB aren’t causing issues.

3. Address the info.csv limit warning:
- The warning about info.csv reaching its limit (65,149 messages) suggests the search is generating excessive log messages, which might indicate a problem with the mpreview output or internal processing. This can sometimes prevent mcollect from writing results.
- Increase the info.csv limit in $SPLUNK_HOME/etc/system/local/limits.conf:

[search]
max_messages_per_info_csv = 100000

- Alternatively, reduce the volume of data processed by narrowing the time range or tightening the filter in mpreview (e.g., specify a single env value).

4. Inspect the search logs:
- The job inspector mentions errors in search.log. Open the job's search.log from the Job Inspector for details on why mcollect is failing. Look for errors related to index access, field mismatches, or data format issues. Common issues include missing _value fields or incompatible data types for mcollect.

5. Test with a simpler query:
- To isolate the issue, try a minimal mcollect command:

| mpreview index=metrics_old target_per_timeseries=1 filter="metric_name=process.java.gc.collections"
| mcollect index=metrics_new

- If this works, gradually add back the env filter to identify where the issue arises.

Next steps:
- Run the modified query with | fields and check whether data is written to metrics_new using | mpreview index=metrics_new.
- Share any specific errors from search.log, or confirm whether the fields (metric_name, _value, env) are present in the mpreview output, for further assistance.
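As a quick sanity check after running the collection search - a minimal sketch using the index name from your post - you can ask the metric catalog what has actually landed in the destination index:

| mcatalog values(metric_name) AS metric_names WHERE index=metrics_new

If that comes back empty, nothing was written and the problem is on the mcollect side rather than with how you're reading the data back.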
Let me be completely transparent on this answer. I do not know anything about what I am about to respond. I put your question into Grok and am giving back what it says, so if it is way off, I apologize. It sounds like you are using Splunk Observability. If you are just trying to pull OS metrics from a system, this is much easier, and I would just use the Splunk Linux TA as a guide for the scripts to pull that off a Linux box, and the Windows TA for Windows. But if my gut is right, that is not your problem, and Splunk Observability Cloud is what you are actually looking at.

So here is the cut and paste from Grok:

To fetch actual metric values (time-series data) in Splunk Observability Cloud using REST APIs, you can use the /v2/datapoint endpoint, which retrieves data points for specified metrics. Unlike the Metrics Catalog endpoints (e.g., /v2/metric), which return metadata like metric names and dimensions, the /v2/datapoint endpoint provides the numerical values for metrics over a specified time range. Here’s how you can approach it:

Endpoint: Use GET /v2/datapoint or POST /v2/datapoint to query metric values. The POST method is useful for complex queries with multiple metrics or filters.

Authentication: Include an access token in the header (X-SF-TOKEN: <YOUR_ORG_TOKEN>). You can find your org token in the Splunk Observability Cloud UI under Settings > Access Tokens.

Query Parameters:
- Specify the metric name(s) you want to query (e.g., cpu.utilization).
- Use dimensions to filter the data (e.g., host:server1).
- Define the time range with startTs and endTs (Unix timestamps in milliseconds) or a relative time range (e.g., -1h for the last hour).
- Set the resolution (e.g., 10s for 10-second intervals).

Example request (using curl):

curl --request POST \
  --header "Content-Type: application/json" \
  --header "X-SF-TOKEN: <YOUR_ORG_TOKEN>" \
  --data '{
    "metrics": [
      {
        "name": "cpu.utilization",
        "dimensions": {"host": "server1"}
      }
    ],
    "startTs": 1697059200000,
    "endTs": 1697062800000,
    "resolution": "10s"
  }' \
  https://api.<REALM>.signalfx.com/v2/datapoint

Replace <YOUR_ORG_TOKEN> with your access token and <REALM> with your Splunk Observability realm (e.g., us0, found in your profile).

Response: The API returns a JSON object with time-series data points, including timestamps and values for the specified metric(s). For example:

{
  "cpu.utilization": [
    {"timestamp": 1697059200000, "value": 45.2, "dimensions": {"host": "server1"}},
    {"timestamp": 1697059210000, "value": 47.8, "dimensions": {"host": "server1"}}
  ]
}

Tips:
- Use the Metric Finder in the Splunk Observability Cloud UI to confirm metric names and dimensions.
- If you’re using OpenTelemetry, ensure your Collector is configured to send metrics to Splunk Observability Cloud.
- For detailed documentation, check the Splunk Observability Cloud developer portal: https://dev.splunk.com/observability/docs/datapoint_endpoint/ and https://help.splunk.com/en/splunk-observability-cloud/manage-data/other-data-ingestion-methods/other-data-ingestion-methods/send-data-using-rest-apis

If you’re still getting metadata, verify you’re not using the /v2/metric or /v2/metricstore/metrics endpoints, which are for metadata only.