All Posts



If you get exactly the same error saying that the volume is not defined, check the definition of that volume. There is probably an issue with it.
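For reference, a minimal sketch of what a valid volume definition looks like in indexes.conf — the names and paths below are illustrative, not taken from the original post:

```
# indexes.conf -- the [volume:...] stanza must exist before any index references it
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[myindex]
homePath = volume:hotwarm/myindex/db
coldPath = volume:hotwarm/myindex/colddb
# thawedPath cannot use a volume reference; it must be a literal path
thawedPath = $SPLUNK_DB/myindex/thaweddb
```

If an index references `volume:hotwarm` but the `[volume:hotwarm]` stanza is missing or misspelled, you get exactly the "volume is not defined" class of error.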
If you also want to see those HFs, you must add them as search peers to your SH. Usually you shouldn't do this on any SHs other than the MC. In the MC, just add them as indexers and put them in their own separate group, like "HFs". Then you can query them via rest by using that group.
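As a sketch of the query side, assuming you have created a group for the HFs in the MC (the group name `dmc_group_heavyforwarder` below is an assumption — substitute whatever group you defined):

```
| rest /services/server/sysinfo splunk_server_group=dmc_group_heavyforwarder
| table splunk_server version os_name
```

The `splunk_server_group` argument restricts the rest call to peers in that MC group, which is how you target the HFs without hitting every search peer.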
This is Splunk Enterprise on-premise version 9.2.4 and Config Explorer is version 1.7.16. Splunkbase reports that Config Explorer 1.7.16 is compatible with all the Splunk 9 versions, 9.0-9.4 as of this writing. The Upgrade Readiness App detected 1 app with deprecated Python on the my-server instance: config_explorer. I have confirmed that the python3.7m executable in $SPLUNK/bin reports version 3.7.17, and I have viewed the jquery.js file in $SPLUNK/etc/apps/splunk_monitoring_console/src/visualizations/heatmap/node_modules/jquery/dist/, which reports "jQuery JavaScript Library v3.5.0" and is the only jquery.js file in the $SPLUNK subdirectories. Why is the Upgrade Readiness App reporting a deprecated Python, and why does my Splunk get warnings about Python and jQuery incompatibilities?
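One thing that may be worth checking (a sketch, not a confirmed diagnosis): the Upgrade Readiness App flags apps whose scripted components do not explicitly declare Python 3. An app can pin the interpreter with the `python.version` setting, for example in its commands.conf; the stanza below is illustrative, not taken from config_explorer:

```
# commands.conf inside the flagged app (stanza and script names are hypothetical)
[mycommand]
filename = mycommand.py
python.version = python3
```

If the flagged app's scripts lack `python.version = python3`, the scanner may report them as deprecated even when the bundled Python executable is current.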
The rest command is processed by the local search head and all search peers, but not by heavy forwarders. That's the way searches work: no query is sent to an HF.
Thanks.  Please answer the other three questions.
using wiredtiger
I was trying to play around with

[general]
pipelineSetAutoScale

[queue]
autoAdjustQueue
maxSize

and nothing had an effect. Then, after some more research, I added this line to server.conf:

[general]
parallelIngestionPipelines = 200

The server immediately allowed over 7000 connections, consumed all 32 GB of memory, and used a huge amount of networking on my interface for a few minutes before settling to about 6800 connections and 22-30 GB of memory. It is currently receiving about 1.5 Gbps and sending ~100 Mbps. Presumably this is burning off the backlog of all the logs that it was not able to forward before. I am up to 3200 devices reporting and increasing. I am hesitant to say this is fixed, and I would like to know if there are any long-term issues with keeping parallelIngestionPipelines = 200. It is also unclear why a single pipeline is limited to 1000 connections, as I have not seen that documented anywhere.
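For comparison, the conventional shape of this setting is a small pipeline count sized to spare CPU cores; each pipeline brings its own set of queues and threads, which likely explains the memory jump described above. A sketch (the value 2 is illustrative — test what your hardware tolerates):

```
# server.conf -- each additional pipeline adds its own queues and processing threads,
# so memory and CPU usage scale roughly with this number
[general]
parallelIngestionPipelines = 2
```

A value of 200 effectively trades memory for connection capacity; if the goal is only more inbound connections, it may be worth testing whether a much smaller count sustains the load once the backlog is drained.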
Did you figure out what the issue was? I am having the same problem running 9.3.
Hi @pedropiin

You could try the following:

index=... | stats ... | eval val1=... | eval time_val=... | sort time_val | streamstats count AS row_num | where row_num > 1 | eval val3=... | stats count...

How this works:
`sort time_val`: orders your events by the specified time variable.
`streamstats count AS row_num`: adds a sequential count to each event, starting from 1.
`where row_num > 1`: filters out the first event (the earliest instance), allowing you to skip the unwanted data point.
The rest of your metrics calculations then proceed on the filtered dataset. This method lets you dynamically exclude the first entry without needing to know its exact characteristics beforehand. Adjust the variable names and the rest of the logic to your context as needed.

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
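A self-contained way to see the skip-the-first-row technique in isolation, using makeresults to generate dummy events (the field names are illustrative):

```
| makeresults count=5
| streamstats count AS row_num
| where row_num > 1
```

This returns 4 of the 5 generated events, dropping only the first, which is exactly the behavior applied to the earliest event after the sort in the answer above.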
I'm running the following command:

| rest /services/server/sysinfo

It shows the indexer and the search head, but not the heavy forwarder. Why is that?
Hi everyone. I'm doing a query in which I sort by time according to a variable and then calculate some metrics over the data. But I need to calculate these metrics without considering exactly the first instance of my data, that is, the earliest one, as it's the one associated with the server being started daily and it's not valid for my needs. It's important to note that I don't have any information associated with this first instance before the query runs, as it's related to a script scheduled to run at a specific time; it generates new values every time, and its duration is variable, meaning that I don't know when it has finished. I cannot share the data nor the exact query, but it's of the form

index=... | stats ... | eval val1=... | eval time_val=... | sort time_val | eval val3=... | stats count...

How could I do this?
Thank you @gcusello, I will take a look.
Thank you @livehybrid, I will give this a try today and let you know the results.
What manifest file? There exists an "app.manifest" file in several apps, but not all of them. The app that is being flagged on my Splunk Enterprise is "config_explorer", and it has an "app.manifest" file. splunk-rolling-upgrade/ does not have an "app.manifest" file; it does, however, have a "lib/splunkupgrade/downloader/manifest.py" file. If someone understands this, please elaborate on which two manifest files to compare.
Hi @gcusello. Thank you for your answer, but it doesn't seem to work... Unfortunately I can't share information as it is sensitive, but it goes along the lines of what you used as an example. My whole query is of the form:

index ... | stats ... | eval var1=... | eval var2=... | sort var2 | eval var3=... | bin_time span=1d | stats(count(condition)) as count_var by day

But it doesn't seem to work. I've already tried both with one day and with a bigger interval, but they all result in "No results found". I can guarantee that this query should return results, because when I run it for only one day without the "bin" command, it gives me the correct answer. What am I doing wrong? Thank you in advance.
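One possible cause, offered as an assumption since the real query isn't shared: `bin` takes the field to bucket as an argument (`| bin var2 span=1d`, or `| bin _time span=1d` if `_time` still exists after the earlier stats), and conditional counting in stats uses `count(eval(...))`. A minimal sketch with dummy data:

```
| makeresults count=10
| streamstats count AS n
| eval _time=now() - n*43200
| bin _time span=1d
| stats count(eval(n > 2)) AS count_var by _time
```

If the pipeline writes `bin_time` (one token) instead of `bin _time`, or if the field being binned was dropped by an earlier `stats`, the search can silently produce no results.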
Thank you @livehybrid. After a long day of mapping related things out, I got the bug and found myself manually reviewing all the directories I could. Of course, directly after I submitted this, I found the correct inputs.conf and realized I needed to use btool. Thanks again for sharing!
I am also seeing this issue. Any updates/solutions you've found? I've tried removing "restartSplunkWeb" from my app stanza declarations and made sure all the apps in the serverclass.conf exist in deployment-apps, but I'm still seeing that same error on forwarder management:

Forwarder management unavailable

There is an error in your serverclass.conf which is preventing deployment server from initializing. Please see your serverclass.conf.spec for more information.
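If it helps, running btool on the deployment server usually points at the offending stanza (a sketch assuming a default install with $SPLUNK_HOME set):

```
$SPLUNK_HOME/bin/splunk btool serverclass list --debug
$SPLUNK_HOME/bin/splunk btool check
```

The `--debug` flag prints the file and line each setting comes from, which makes it easier to spot the stanza the "error in your serverclass.conf" message is complaining about.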
Hi, Splunk Cloud free trial instances don't have the self-service ability to set IP allow lists, so you won't be able to configure Log Observer Connect with a free trial stack. If you are working with a Splunk account team, they can get you access to a proof-of-concept stack where this integration would be possible.
Hi. In my company we have Symantec Endpoint Security (SES), which is in the cloud. I have created a Bearer Token and have made the configurations on the Symantec side; the problem occurs when I need to integrate it with Splunk. Is there someone with experience in Symantec who can help me?
Hi, I just opened the code, adjusted it, and it works.