I'm also assuming that you've already set maxKBps = 0 in limits.conf:

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0
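One way to verify the effective value after a restart (assuming a standard install path) is btool, which prints each setting alongside the file it came from:

$SPLUNK_HOME/bin/splunk btool limits list thruput --debug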
Hi @MichaelM1,

Increasing parallelIngestionPipelines to a value larger than 1 is similar to running multiple instances of splunkd with splunktcp inputs on different ports. As a starting point, however, I would leave parallelIngestionPipelines unset or at the default value of 1.

splunkd uses a series of queues in a pipeline to process events. Of note:

parsingQueue
aggQueue
typingQueue
rulesetQueue
indexQueue

There are other queues, but these are the best documented. See https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774/highlight/true#M103484. I have copies of the printer and high-DPI display friendly PDFs if you need them.

On a typical universal forwarder acting as an intermediate forwarder, parsingQueue, which performs minimal event parsing, and indexQueue, which sends events to outputs, are the likely bottlenecks. Your metrics.log event provides a hint:

<date time> Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=1217, largest_size=1217, smallest_size=0

Note that metrics.log logs queue names in lower case, but queue names are case-sensitive in configuration files. parsingQueue is blocked because 1217KB is greater than 512KB. The inputs.conf splunktcp stopAcceptorAfterQBlock setting controls what happens to the listener port when a queue is blocked, but you don't need to modify this setting.

In your case, I would start by leaving parallelIngestionPipelines at the default value of 1 as noted above and increasing indexQueue to the next multiple of 128KB larger than twice the largest_size value observed for parsingQueue. In $SPLUNK_HOME/etc/system/local/server.conf on the intermediate forwarder:

[queue=indexQueue]
# 2 * 1217KB = 2434KB <= 20 * 128KB = 2560KB
maxSize = 2560KB

(x86-64, ARM64, and SPARC architectures have 64-byte cache lines, but on the off chance you encounter AIX on PowerPC with 128-byte cache lines, for example, you'll avoid buffer alignment performance penalties, closed-source splunkd memory allocation overhead notwithstanding.)

Observe metrics.log following the change and keep increasing maxSize until you no longer see instances of blocked=true. If you run out of memory, add more memory to your intermediate forwarder host or consider scaling your intermediate forwarders horizontally with additional hosts.

As an alternative, you can start by increasing maxSize for parsingQueue and only increase maxSize for indexQueue if you see blocked=true messages in metrics.log:

[queue=parsingQueue]
maxSize = 2560KB

You can usually find the optimal values through trial and error without resorting to a queue-theoretic analysis. If you find that your system becomes CPU-bound at some maxSize limit, you can increase parallelIngestionPipelines, for example, to N-2, where N is the number of cores available. Following that change, modify maxSize from default values by observing metrics.log. Note that each pipeline consumes as much memory as a single-pipeline splunkd process with the same memory settings.
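To keep an eye on blocked queues after each change, a search along these lines against the forwarder's _internal data should work, assuming the intermediate forwarder sends its own internal logs onward (group and name values appear in lower case, exactly as in the metrics.log line above):

index=_internal source=*metrics.log* group=queue blocked=true
| timechart count by name

A persistently rising count for parsingqueue or indexqueue tells you which maxSize to raise next.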
As others have already said, that license is generated when you install Splunk for the first time. After that, you officially have the following options: convert the license to a free license; update it to a dev/test license (if your company has a paid Splunk license); install a developer license; try to get a trial extension from Splunk or a partner if you are evaluating it and are almost ready to pay for the official version; pay for an official license; or, as a last option, uninstall Splunk, remove its files from disk, and then reinstall it.
If you have exactly the same error saying that the volume is not defined, then check the definition of that volume. There is probably some issue with it.
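For reference, a volume is defined in indexes.conf and then referenced by index stanzas. This is a generic sketch with placeholder names and paths, not your actual configuration:

# indexes.conf (names and paths are placeholders)
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[my_index]
homePath = volume:primary/my_index/db
coldPath = volume:primary/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

If the [volume:...] stanza is missing, misspelled, or not deployed everywhere a homePath or coldPath references it, you get an error like that. Note that thawedPath cannot use a volume reference.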
If you want to see those HFs as well, you must add them as search peers to your SH. Usually you shouldn't do this on any SH other than the MC. In the MC, just add them as indexers and put them in their own separate group, such as HFs. Then you can query them over REST by using that group, as in the sketch below.
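Assuming you created a custom MC group named HFs (the MC saves custom groups as distributed search groups, typically named dmc_customgroup_<name>), the search would look something like:

| rest /services/server/sysinfo splunk_server_group=dmc_customgroup_HFs

The splunk_server_group argument restricts the rest command to peers in that group; the exact group name to use is whatever appears in your distsearch.conf.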
This is Splunk Enterprise on-premises version 9.2.4 and Config Explorer is version 1.7.16. Splunkbase reports that Config Explorer 1.7.16 is compatible with all Splunk 9 versions, 9.0-9.4 as of this writing.

The Upgrade Readiness App detected 1 app with deprecated Python on the my-server instance: config_explorer. I have confirmed that the python3.7m executable in $SPLUNK/bin reports version 3.7.17, and I have viewed the jquery.js file in $SPLUNK/etc/apps/splunk_monitoring_console/src/visualizations/heatmap/node_modules/jquery/dist/, which identifies itself as jQuery JavaScript Library v3.5.0 and is the only jquery.js file in the $SPLUNK subdirectories.

Why is the Upgrade Readiness App reporting deprecated Python, and why does my Splunk get warnings about Python and jQuery incompatibilities?
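For reference, one way to double-check which Python interpreter Splunk itself invokes (assuming a default installation) is:

$SPLUNK_HOME/bin/splunk cmd python3 --version

As far as I understand, though, the Upgrade Readiness App scans app source code for Python 2 and old jQuery idioms rather than checking the installed interpreter, so a current python3 binary does not by itself clear the warning.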
The rest command is processed by the local search head and all search peers, but not by heavy forwarders.  That's the way searches work.  No query is sent to an HF.
Thanks.  Please answer the other three questions.
Using WiredTiger.
I was trying to play around with

[general]
pipelineSetAutoScale

[queue]
autoAdjustQueue
maxSize

and nothing had an effect.

Then after some more research I added this line to server.conf:

[general]
parallelIngestionPipelines = 200

The server immediately allowed over 7000 connections, consumed all 32GB of memory, and used a huge amount of networking on my IF for a few minutes before settling to about 6800 connections and 22-30GB of memory. It is currently receiving about 1.5Gbps and sending ~100Mbps. Presumably this is burning off the backlog of all the logs that it was not able to forward before. I am up to 3200 devices reporting and increasing.

I am hesitant to say this is fixed, and I would like to know if there are any long-term issues with keeping parallelIngestionPipelines = 200. It is also unclear why a single pipeline is limited to 1000 connections, as I have not seen that documented anywhere.
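One way to watch the connection count settle over time, assuming the IF forwards its own _internal logs and substituting your intermediate forwarder's host name for the placeholder, is something like:

index=_internal host=<your_if_host> source=*metrics.log* group=tcpin_connections
| timechart dc(sourceIp) AS connected_forwarders

The tcpin_connections entries in metrics.log carry one line per inbound connection per reporting interval, so distinct sourceIp values approximate the number of connected forwarders.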
Did you figure out what the issue was? I am having the same problem running 9.3.
Hi @pedropiin 

You could try the following:

index=...
| stats ...
| eval val1=...
| eval time_val=...
| sort time_val
| streamstats count AS row_num
| where row_num > 1
| eval val3=...
| stats count...

How this works:

`sort time_val`: Orders your events by the specified time variable.
`streamstats count AS row_num`: Adds a sequential count to each event, starting from 1.
`where row_num > 1`: Filters out the first event (the earliest instance), allowing you to skip the unwanted data point.

The rest of your metrics calculations then proceed on the filtered dataset. This method allows you to dynamically exclude the first entry without needing to know its exact characteristics beforehand. Adjust the variable names and the rest of the logic to your context as needed.

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped.

Regards

Will
I'm running the following command:

| rest /services/server/sysinfo

And it shows the indexer and the search head but not the heavy forwarder. What can it be?
Hi everyone.

I'm doing a query in which I sort by time according to a variable and then calculate some metrics over the data. But I need to calculate these metrics without considering exactly the first instance of my data, that is, the earliest one, as it's the one associated with the server being started daily and it's not valid for my needs. It's important to note that I don't have any information associated with this first instance before the query runs, as it's related to a script scheduled to run at a specific time, but it generates new values every time, and its duration is variable, meaning that I don't know when it has finished.

I cannot share information related to the data nor the exact query, but it's of the form:

index=...
| stats ...
| eval val1=...
| eval time_val=...
| sort time_val
| eval val3=...
| stats count...

How could I do this?
Thank you @gcusello  I will take a look  
Thank you @livehybrid I will give this a try today and let you know the results
What manifest file? There exists an "app.manifest" file in several apps, but not all of them. The app that is being flagged on my Splunk Enterprise is "config_explorer" and it has an "app.manifest" file. splunk-rolling-upgrade/ does not have an "app.manifest" file; it does, however, have a "lib/splunkupgrade/downloader/manifest.py" file. If someone understands this, please elaborate on which two manifest files to compare.
Hi @gcusello. Thank you for your answer, but it doesn't seem to work...

Unfortunately I can't share information as it is sensitive, but it goes along the lines of what you used as an example. My whole query is of the form:

index ...
| stats ...
| eval var1=...
| eval var2=...
| sort var2
| eval var3=...
| bin_time span=1d
| stats(count(condition)) as count_var by day

But it doesn't seem to work. I've already tried both with one day and with a bigger interval, but they all result in "No results found". I can guarantee that this query should return results, because when I run it for only one day without the "bin" command, it gives me the correct answer. What am I doing wrong?

Thank you in advance.
Thank you @livehybrid, after a long day of mapping related things out, I got the bug and found myself manually reviewing all the directories I could. Of course, directly after I submitted this, I found the correct inputs.conf and realized I need to use btool.

Thanks again for sharing!
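For anyone who lands here later, the btool invocation in question looks like this (assuming a default install path); --debug prints which file each setting comes from, which is what makes the "correct" inputs.conf obvious:

$SPLUNK_HOME/bin/splunk btool inputs list --debug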
I am also seeing this issue. Any updates/solutions you've found? I've tried removing "restartSplunkWeb" from my app stanza declarations and made sure all the apps in serverclass.conf exist in deployment-apps, but I'm still seeing that same error on forwarder management:

Forwarder management unavailable

There is an error in your serverclass.conf which is preventing deployment server from initializing. Please see your serverclass.conf.spec for more information.
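One hedged way to narrow down the failing stanza (assuming a default install path) is to have btool render the merged serverclass.conf; typos in stanza headers or setting names usually surface here:

$SPLUNK_HOME/bin/splunk btool serverclass list --debug

followed by a reload once it parses cleanly:

$SPLUNK_HOME/bin/splunk reload deploy-server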