All Posts


I was trying to play around with [general] pipelineSetAutoScale and [queue] autoAdjustQueue / maxSize, and nothing had an effect. Then, after some more research, I added this line to server.conf:

[general]
parallelIngestionPipelines = 200

The server immediately allowed over 7000 connections, consumed all 32 GB of memory, and used a huge amount of network bandwidth on my interface for a few minutes before settling at about 6800 connections and 22-30 GB of memory. It is currently receiving about 1.5 Gbps and sending ~100 Mbps. Presumably this is burning off the backlog of all the logs that it was not able to forward before. I am up to 3200 devices reporting and increasing. I am hesitant to say this is fixed, and I would like to know if there are any long-term issues with keeping parallelIngestionPipelines = 200. It is also unclear why a single pipeline is limited to 1000 connections, as I have not seen that documented anywhere.
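For what it's worth, one way to watch whether those extra pipelines stay healthy over time is to chart queue fill from the internal metrics log. This is only a sketch; the host filter is a placeholder for your own forwarder, and the fields are the standard metrics.log group=queue ones:

index=_internal source=*metrics.log* group=queue host=<your_heavy_forwarder>
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name

Sustained fill above ~80% on any queue would suggest the setting is masking a downstream bottleneck rather than fixing it.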
Did you figure out what the issue was? I am having the same problem running 9.3.
Hi @pedropiin

You could try the following:

index=...
| stats ...
| eval val1=...
| eval time_val=...
| sort time_val
| streamstats count AS row_num
| where row_num > 1
| eval val3=...
| stats count...

How this works:

`sort time_val`: Orders your events by the specified time variable.
`streamstats count AS row_num`: Adds a sequential count to each event, starting from 1.
`where row_num > 1`: Filters out the first event (the earliest instance), allowing you to skip the unwanted data point.

The rest of your metrics calculations then proceed on the filtered dataset. This method lets you dynamically exclude the first entry without needing to know its exact characteristics beforehand. Adjust the variable names and the rest of the logic to your context as needed.

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
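To see the trick in isolation, here is a self-contained run you can paste into any search bar. It fabricates five dummy events with makeresults, so every field name is a placeholder rather than anything from your real data:

| makeresults count=5
| streamstats count AS n
| eval time_val=relative_time(now(), "-" . n . "h"), value=n * 10
| sort time_val
| streamstats count AS row_num
| where row_num > 1
| stats avg(value) AS avg_value count AS events_used

The oldest of the five fabricated events (the smallest time_val) is the one that gets dropped, and the stats run only over the remaining four.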
I'm running the following command:

| rest /services/server/sysinfo

It shows the indexer and the search head but not the heavy forwarder. What could it be?
Hi everyone. I'm doing a query in which I sort the data by time according to a variable and then calculate some metrics over it. But I need to calculate these metrics without considering the very first instance of my data, that is, the earliest one, as it's the one associated with the server being started daily and it's not valid for my needs. It's important to note that I don't have any information about this first instance before the query runs, as it's related to a script scheduled to run at a specific time, but it generates new values every time and its duration is variable, meaning that I don't know when it has finished. I cannot share information related to the data nor the exact query, but it's of the form:

index=...
| stats ...
| eval val1=...
| eval time_val=...
| sort time_val
| eval val3=...
| stats count...

How could I do this?
Thank you @gcusello, I will take a look.
Thank you @livehybrid, I will give this a try today and let you know the results.
What manifest file? An "app.manifest" file exists in several apps but not all of them. The app that is being flagged on my Splunk Enterprise instance is "config_explorer", and it has an "app.manifest" file. splunk-rolling-upgrade/ does not have an "app.manifest" file; it does, however, have a "lib/splunkupgrade/downloader/manifest.py" file. If someone understands this, please elaborate on which two manifest files to compare.
Hi @gcusello. Thank you for your answer, but it doesn't seem to work... Unfortunately I can't share information as it is sensitive, but it goes along the lines of what you used as an example. My whole query is of the form:

index=...
| stats ...
| eval var1=...
| eval var2=...
| sort var2
| eval var3=...
| bin_time span=1d
| stats(count(condition)) as count_var by day

I've already tried both with one day and with a bigger interval, but they all result in "No results found". I can guarantee that this query should return results, because when I run it for only one day without the "bin" command, it gives me the correct answer. What am I doing wrong? Thank you in advance.
Thank you @livehybrid. After a long day of mapping related things out, I got the bug and found myself manually reviewing all the directories I could. Of course, directly after I submitted this, I found the correct inputs.conf and realized I need to use btool. Thanks again for sharing!
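For anyone who lands here with the same problem, the btool invocation in question looks roughly like this ($SPLUNK_HOME is assumed to be your install path):

$SPLUNK_HOME/bin/splunk btool inputs list --debug

The --debug flag prints the file path each setting was merged from, which is what makes it possible to spot which inputs.conf actually wins.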
I am also seeing this issue. Any updates/solutions you've found? I've tried removing "restartSplunkWeb" from my app stanza declarations and made sure all the apps in serverclass.conf exist in deployment-apps, but I'm still seeing that same error on forwarder management:

Forwarder management unavailable

There is an error in your serverclass.conf which is preventing deployment server from initializing. Please see your serverclass.conf.spec for more information.
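Not a fix, but one way to narrow down which stanza the deployment server is choking on is to have btool parse the merged serverclass configuration and check all conf files for syntax errors (a sketch; the path assumes a default install):

$SPLUNK_HOME/bin/splunk btool serverclass list --debug
$SPLUNK_HOME/bin/splunk btool check

If btool itself reports a parse error, the file and stanza it names are usually what the forwarder management page is complaining about.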
Hi, the Splunk Cloud free trial instances don't have the self-service ability to set IP allow lists, so you won't be able to configure Log Observer Connect with a free trial stack. If you are working with a Splunk account team, they can get you access to a proof-of-concept stack where this integration would be possible.
Hi. In my company we have Symantec Endpoint Security (SES), which is in the cloud. I have created a Bearer Token and made the configuration on the Symantec side; the problem occurs when I need to integrate it with Splunk. Is there someone with experience with Symantec who could help me?
Hi, I just opened the code, adjusted it, and it works.
I tried installing it on multiple instances and got the same error. The Splunk version is 9.2.2.
Hi @pedropiin,

you have to run something like this:

<your_search>
| bin span=1d _time
| stats sum(metric1) AS metric1 sum(metric2) AS metric2 sum(metric3) AS metric3 BY _time

I could be more detailed if you share more information about your data.

Ciao.
Giuseppe
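To illustrate the bin + stats pattern on its own, here is a throwaway search that fabricates 48 hourly events and rolls them up per day; every field in it is synthetic, so treat it only as a shape to copy:

| makeresults count=48
| streamstats count AS n
| eval _time=relative_time(now(), "-" . n . "h"), metric1=n
| bin span=1d _time
| stats sum(metric1) AS metric1 BY _time

Each output row is one day, which is exactly the per-day layout asked about above.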
Here are some internal logs.

02-14-2025 08:57:21.124 -0600 ERROR PersistentScript [3645488 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py}: File "/opt/splunk/etc/apps/TA-virustotal-app/bin/lib/typing_extensions.py", line 1039
02-14-2025 08:57:21.124 -0600 ERROR PersistentScript [3645488 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py}: File "/opt/splunk/etc/apps/TA-virustotal-app/bin/lib/aiohttp/hdrs.py", line 13, in <module>
02-14-2025 08:57:21.124 -0600 ERROR PersistentScript [3645488 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py}: File "/opt/splunk/etc/apps/TA-virustotal-app/bin/lib/aiohttp/__init__.py", line 5, in <module>
02-14-2025 08:57:21.124 -0600 ERROR PersistentScript [3645488 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py}: File "/opt/splunk/etc/apps/TA-virustotal-app/bin/lib/vt/client.py", line 23, in <module>
Hi everyone. I have a query that calculates a number of metrics, such as the average, the max value, etc., for a specific date, given an interval from a dropdown menu. All the metrics are then output in a table in which each row represents one day and the columns are the metrics themselves. I need to apply the same query to a number of days given an interval and output the result of each day as a new row in the table. For example, if the user queries the past 5 days, I need five rows, each with the metrics associated only with the data from that day. How could I do this?
Hi @Real_captain,

please try this:

index=xyz source=db (TERM(A) OR TERM(B) OR TERM(C) OR TERM(D)) ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
| eval Function=case(like(TEXT, "%ENDED - ABEND%"), "ABEND", like(TEXT, "%ENDED - TIME%"), "ENDED", like(TEXT, "%STARTED - TIME%"), "STARTED")
| eval DAT=strftime(relative_time(_time, "+0h"), "%d/%m/%Y"), {Function}_TIME=_time
| rename DAT AS Date_of_reception
| eval Application=case(JOBNAME IN ("A","B"), "A1", JOBNAME IN ("C","D"), "A2")
| stats earliest(eval(if(JOBNAME="A", STARTED_TIME, null()))) AS STARTED_TIME latest(eval(if(JOBNAME="B", ENDED_TIME, null()))) AS ENDED_TIME BY Application Date_of_reception
| eval Execution_Time=if(isnull(ENDED_TIME), "Job B not started", ENDED_TIME-STARTED_TIME)
| eval STARTED_TIME=strftime(STARTED_TIME, "%d/%m/%Y"), ENDED_TIME=if(isnull(ENDED_TIME), "Job B not started", strftime(ENDED_TIME, "%d/%m/%Y"))
| table Application Date_of_reception STARTED_TIME ENDED_TIME Execution_Time

Ciao.
Giuseppe
Yes, you are right. But when Job A is completed and Job B is still not started, then
Application Start Time = start time of Job A, and
Application End Time = end time of Job A.

If Job B is not started, then Application End Time should be NULL, as we are checking the end time of Job B. That's why I want to do the below:

Application Start Time of A1 = start time of Job A, and Application End Time of A1 = end time of Job B.
Application Start Time of A2 = start time of Job C, and Application End Time of A2 = end time of Job D.
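If it helps, here is a minimal sketch of exactly that mapping, assuming the JOBNAME field and the STARTED/ENDED markers from earlier in this thread; the index, source, and date format are placeholders:

index=xyz source=db (TERM(A) OR TERM(B) OR TERM(C) OR TERM(D))
| eval Application=case(JOBNAME IN ("A","B"), "A1", JOBNAME IN ("C","D"), "A2")
| eval start_time=if(JOBNAME IN ("A","C") AND searchmatch("STARTED"), _time, null())
| eval end_time=if(JOBNAME IN ("B","D") AND searchmatch("ENDED"), _time, null())
| stats earliest(start_time) AS Application_Start_Time latest(end_time) AS Application_End_Time BY Application
| eval Application_Start_Time=strftime(Application_Start_Time, "%d/%m/%Y %H:%M:%S"), Application_End_Time=if(isnull(Application_End_Time), "NULL", strftime(Application_End_Time, "%d/%m/%Y %H:%M:%S"))

Because end_time is null for every event of an application whose Job B/D has not ended, latest() returns nothing for it, which gives the NULL end-time behavior described above.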