All Topics


If my clustered Splunk Cloud indexers contain data that has been deleted (marked for deletion) and that index hits its retention period for DDAA archiving, will the data marked for deletion be archived to DDAA storage, or will it be dropped at that point? I would prefer that it be dropped; is there a configuration option for this? Thanks.
I have been editing the following query to get the duration per workload. However, when creating the visualization we only see the duration in the bar chart; I am looking to see whether we can also show something with regard to the workload and/or job name.

| inputlookup cyclestarttimes.csv
| lookup cycleendtimes.csv CYCLE WORKLOAD
| search WORKLOAD=F91
| where isnotnull(ENDTIME)
| eval STARTC = "2021-08-19"
| eval ENDC = "2021-08-20"
| eval STARTC = strptime(STARTC, "%Y-%m-%d")
| eval ENDC = strptime(ENDC, "%Y-%m-%d")
| eval CYCLEC = strptime(CYCLE, "%Y-%m-%d")
| where CYCLEC >= STARTC AND CYCLEC <= ENDC
| stats values(STARTTIME) as START values(ENDTIME) as STOP by WORKLOAD CYCLE
| eval _time = strptime(START, "%Y-%m-%d %H:%M:%S")
| eval end_time = strptime(STOP, "%Y-%m-%d %H:%M:%S")
| eval duration = (end_time - _time) * 1000
| eval JOBNAME = WORKLOAD
| stats count by _time, duration, WORKLOAD, JOBNAME
| table _time WORKLOAD JOBNAME duration
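A minimal sketch of one way to keep the workload visible in the chart, assuming the field names from the query above: pivot the data so each workload becomes its own series, in place of the final table command.

| xyseries _time WORKLOAD duration

Each WORKLOAD then shows up as a separate series in the chart legend and tooltips, so a bar identifies both the workload and its duration.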
We have onboarded Alibaba Cloud (Alicloud) data into Splunk and are looking to create use cases. Is there any Alicloud use-case documentation for Splunk? What use cases can we create, and is there a reference link?
Hi there. New here to using Splunk: we are looking to use the Splunk Universal Forwarder to forward Windows event logs to a Splunk server. I have installed the forwarder on a Win10 client and I can see events coming into Splunk, which is great! Is there any way I can tweak the Universal Forwarder on the client PC so that it does not forward some events, such as Information logs and Audit Success, and possibly stop forwarding all the text from the event, such as the description? Trying to be as lean as possible with these events, really. Thanks.
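A minimal sketch of event filtering in the forwarder's inputs.conf, assuming the Security and Application event logs; the event codes below are hypothetical examples, so verify the blacklist keys and regexes against the inputs.conf spec for your UF version. (Stripping the description text itself is normally done at parse time on an indexer or heavy forwarder with props/transforms, not on the UF.)

[WinEventLog://Security]
disabled = 0
# hypothetical example: drop specific noisy event codes
blacklist1 = EventCode="(4662|5156|5158)"

[WinEventLog://Application]
disabled = 0
# hypothetical example: drop Information-level events
blacklist1 = Type="Information"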
I want to know how I can incrementally go through and add missing times (hours) per user across a number of users; the fail_num for those missing times should be 0. Above is a result showing only two users from the query I'm building in one of my previous posts ("Detecting-Spikes-Anomalies-in-Failed-Logins-over-time"). For cases where fail_num was 0, no entry was made, so I have no row for that timeslot. When I use trendline to analyze this it won't work, because there aren't enough data points to compute the moving average. I can't use timechart to fill these in because it breaks other things in my query; it makes the analysis impossible unless some kind of three-dimensional analysis exists in Splunk, and sadly I'm not mathematically or programmatically gifted enough to think up such a solution. I've thought of using foreach, but I'm not sure that's the route to go here. So if there is a more programmatic way to add the missing rows, using regex or some more efficient method, please enlighten me.
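A minimal sketch of one way to fill missing hourly rows with zeros without using timechart, assuming an hourly span and fields named user and fail_num; the pivot/unpivot pair keeps the result in tall (row-per-user-per-hour) form.

... base failed-login search ...
| bin _time span=1h
| stats count as fail_num by _time user
| xyseries _time user fail_num
| makecontinuous _time span=1h
| fillnull value=0
| untable _time user fail_num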
Hi, I am working at a corporation, using Splunk in my browser. I have installed the Windows forwarder and configured my user name and the Splunk server (client side); I used the URL and port number from the browser URL of the corporate Splunk server. Now I want to be able to send lab data to Splunk. I don't want to monitor anything on the Windows system; the PC is just a means to run a script (Python) that collects data from some instruments. I was thinking the forwarder would let me use some kind of command within my script to send data. I could write to a file, but I would prefer to send data live with some kind of command. How can this be done, and is there specific documentation for this type of activity? Maybe I need to write to a file, let the forwarder monitor that file, and continuously overwrite it, assuming the forwarder looks at the file on some periodic basis, like every 20 or 60 seconds. I would appreciate any general guidance, especially pointers to documentation.
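One common approach for this is the HTTP Event Collector (HEC) rather than the forwarder: the script posts events directly to Splunk over HTTPS. A minimal Python sketch, assuming HEC is enabled on your Splunk deployment and you have a token; the host name, port, index, and sourcetype below are placeholders.

import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

def send_reading(instrument, value):
    # Wrap the measurement in the HEC event envelope
    payload = {
        "event": {"instrument": instrument, "value": value},
        "sourcetype": "lab:instrument",   # placeholder sourcetype
        "index": "lab",                   # placeholder index
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        verify=False,   # point this at your CA bundle in production
        timeout=10,
    )
    resp.raise_for_status()

send_reading("thermocouple_1", 72.4)

If HEC is not available in your environment, the fallback you describe (append, not overwrite, a local file and let the forwarder monitor it) also works; the monitor input tails the file continuously rather than polling on a fixed interval.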
Hi, I have the below search. I am trying to use report acceleration, but because of the eventstats commands I can't. I have tried to rewrite the search, however it does not work; could someone please help me?

index=test sourcetype=test
| eval ResponseTime=round(response_time/1000,2)
| eventstats perc99(ResponseTime) as p99Resp
| eventstats perc90(ResponseTime) as p90Resp
| eventstats perc75(ResponseTime) as p75Resp
| eval p99Unit=if(ResponseTime<=p99Resp,0,1)
| eval p00Response=ResponseTime
| eval p98Response=if(ResponseTime<=p99Resp,ResponseTime,null())
| eval p99Response=if(ResponseTime<=p99Resp,null(),ResponseTime)
| eval p90Unit=if(ResponseTime<=p90Resp,0,1)
| eval p90Response=if(ResponseTime<=p90Resp,ResponseTime,null())
| eval p90Response=if(ResponseTime<=p90Resp,null(),ResponseTime)
| eval p75Unit=if(ResponseTime<=p75Resp,0,1)
| eval p75Response=if(ResponseTime<=p75Resp,ResponseTime,null())
| eval p75Response=if(ResponseTime<=p75Resp,null(),ResponseTime)
| stats sum(p99Unit) as P99Count, avg(p99Response) as p99ResponseAvg, min(p99Response) as p99ResponseMin, max(p99Response) as p99ResponseMax
        sum(p90Unit) as P90Count, avg(p90Response) as p90ResponseAvg, min(p90Response) as p90ResponseMin, max(p90Response) as p90ResponseMax
        sum(p75Unit) as P75Count, avg(p75Response) as p75ResponseAvg, min(p75Response) as p75ResponseMin, max(p75Response) as p75ResponseMax
| rename P99Count as "99% Total Count"
| rename p99ResponseAvg as "99% AVG"
| rename p99ResponseMin as "99% Min Response Time"
| rename p99ResponseMax as "99% Max Response Time"
| rename P90Count as "90% Total Count"
| rename p90ResponseAvg as "90% AVG"
| rename p90ResponseMin as "90% Min Response Time"
| rename p90ResponseMax as "90% Max Response Time"
| rename P75Count as "75% Total Count"
| rename p75ResponseAvg as "75% AVG"
| rename p75ResponseMin as "75% Min Response Time"
| rename p75ResponseMax as "75% Max Response Time"

Thanks, Joe
Hi, I have TCP 514 logs in the same sourcetype. There are different timestamp formats in the log, and even within events. I don't understand my mistake with datetime.xml: it works for one format but not for the second. I tested the regexes with search (| rex field=_raw ".........") and the fields are extracted correctly. I followed this tutorial: https://www.function1.com/2013/01/oh-no-splunking-log-files-with-multiple-formats-no-problem Thanks for your help.

Example:
First log:
<111> YYYY-MM-DDTHH:MM:SS+02:00 localhost house 12154 - @ip [DD/LitMM/YYYY:HH:MM:SS.MS] ...........
_time is correctly extracted.
Second log:
<145> YYYY-MM-DDTHH:MM:SS+02:00 localhost foo - - YYYY-MM-DDTHH:MM:SS.MS+0000 jizjfoziejfz battle: cececeijoijoi [YYYY-MM-DDTHH:MM:SS.MS+0000] ...........
_time is not extracted; the value is the index time.

I'm on a standalone station, so I retyped the regexes by hand (there may be a typo). Configuration:

In datetime.xml on the heavy forwarder (etc/apps/test/default):
<define name="_house" extract="day, litmonth,year,hour,minute,second,subsecond">
<text>house.*\[(\d{2})/(\w{3})/(\d{4}):(\d{2}):(\d{2}):(\d{2})\.\d+\]></text>
</define>
<define name="_battle" extract="year,month,day,hour,minute,second,subsecond">
<text>battle.*\[(\d{4})\-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})\.\d+\+\d{4}\]></text>
</define>
<timePatterns>
<use name="_house"/>
<use name="_battle"/>
</timePatterns>
<datePatterns>
<use name="_house"/>
<use name="_battle"/>
</datePatterns>
</datetime>

In props.conf:
[my_sourcetype]
DATETIME_CONGIG = /etc/apps/test/defaults/datetime.xml
LINE_BREAKER = ([\r|\n])+
SHOULD_LINEMERGE = false
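For reference, a minimal sketch of what a working pair might look like, assuming the same sourcetype and regexes as above. Things worth checking against it: the props.conf setting is spelled DATETIME_CONFIG and its path is relative to SPLUNK_HOME, the directory is "default" rather than "defaults", the file needs an opening <datetime> tag, and each name in extract= needs a matching capture group (so subsecond gets its own group here).

<datetime>
  <define name="_house" extract="day, litmonth, year, hour, minute, second, subsecond">
    <text><![CDATA[house.*\[(\d{2})/(\w{3})/(\d{4}):(\d{2}):(\d{2}):(\d{2})\.(\d+)\]]]></text>
  </define>
  <define name="_battle" extract="year, month, day, hour, minute, second, subsecond">
    <text><![CDATA[battle.*\[(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})\.(\d+)\+\d{4}\]]]></text>
  </define>
  <timePatterns>
    <use name="_house"/>
    <use name="_battle"/>
  </timePatterns>
  <datePatterns>
    <use name="_house"/>
    <use name="_battle"/>
  </datePatterns>
</datetime>

[my_sourcetype]
DATETIME_CONFIG = /etc/apps/test/default/datetime.xml
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false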
Hello. I have a set of hosts which send some stats. In my case these are rsyslog impstats statistics, but it could be anything, for example SNMP interface counters. The point is that I have a counter which increases with time and I want to compute incremental statistics. Yes, I know you'll point me towards the delta command, but it can only compute the difference from one event to another, and I have several different sources that each need their own stats (let's say something like | delta <parameter> by host - unfortunately there's no such command ;-)). After some poking around, the range() statistical function seems to fit nicely: it calculates, as the name implies, the range between the lowest and highest value of the given field, so if I pair it with timechart it works beautifully. Almost. The problem is that the counters have finite length and after some time overflow back to 0. When this happens, range() of course returns some ridiculous values. If it were a simple delta calculation, I'd probably just do a modulo operation or some other conditional eval to account for it, but I don't see a reasonable way to do that with already summed-up values, since even the field names of the summary table are variable, depend on the host names, and I can't know the list of hosts beforehand. Is there any reasonable way to filter out the "overflowed" values? Just removing outliers also removes the "bottom" values, which is not what I need.
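A minimal sketch of a per-host delta with an overflow guard, assuming the counter field is named counter and that an overflow simply wraps back towards zero; on wrap, the delta is approximated by the new counter value.

index=my_stats sourcetype=impstats
| sort 0 _time
| streamstats current=f window=1 last(counter) as prev_counter by host
| eval delta = counter - prev_counter
| eval delta = if(delta < 0, counter, delta)
| timechart span=5m sum(delta) by host

streamstats with a by clause keeps a separate running window per host, which effectively gives the missing "delta ... by host" behavior.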
I'm using the following to eval current_day:

| inputlookup Files_And_Thresholds
| eval current_day=lower(strftime(relative_time(now(),"@s"),"%A"))

I have a column named file_days in a lookup file (.csv) with days that I would like to search across, and I cannot figure out why this does not match. If I replace current_day with the literal string "tuesday" it works fine.

| makemv delim=" " file_days
| search file_days=current_day

Lookup table:
file_cutoff_time   file_days                                    file_name
23:00:00           thursday wednesday                           FILE001.CSV
22:00:00           friday monday thursday tuesday wednesday     FILE002.CSV
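The reason it fails is that | search file_days=current_day compares the field against the literal string "current_day"; the right-hand side of a search term is not treated as a field reference. A minimal field-to-field sketch, assuming the lookup and field names above: expand the multivalue days into separate rows, then compare with where.

| inputlookup Files_And_Thresholds
| eval current_day=lower(strftime(now(), "%A"))
| makemv delim=" " file_days
| mvexpand file_days
| where file_days=current_day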
Hi, the issue is that some servers with the universal forwarder agent deployed on them are not able to successfully download apps from the deployment server.

Environment details:
Server: Linux RHEL 7.9 (3.x kernel)
Deployment server: Splunk Enterprise 8.x
Splunk Universal Forwarder: 8.2.2 for Linux

The agent is successfully installed and connected to the deployment server using the command below:
./splunk set deploy-poll deployment-server:8089
It shows up successfully on the deployment server as well; however, when I push apps to the server via the deployment server they are not successfully downloaded.

From the universal forwarder's splunkd.log:
ERROR HttpClientRequest *** - HTTP client error=Connection closed by peer while accessing server=*** for request=***
From the deployment server's splunkd.log,

What can be the possible reason for this behavior? The communication seems fine (we've opened uni-directional communication from the server to the deployment server on port 8089). Kind regards
So I have added a table drilldown to this pie chart, but I need the rows in the table to be displayed according to the value I clicked on the pie chart. For example, if I click on the "production" slice of the pie, only production values should show in the table. How can I do this?
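A minimal sketch in Simple XML, assuming the pie chart splits on a field named environment and that the table search can filter on that field; the token name, index, and field names are placeholders.

<panel>
  <chart>
    <search>
      <query>index=main | stats count by environment</query>
    </search>
    <option name="charting.chart">pie</option>
    <drilldown>
      <set token="env_tok">$click.value$</set>
    </drilldown>
  </chart>
</panel>
<panel>
  <table depends="$env_tok$">
    <search>
      <query>index=main environment="$env_tok$" | table host status environment</query>
    </search>
  </table>
</panel>

On a pie chart, $click.value$ carries the label of the clicked slice, and the depends attribute keeps the table hidden until a slice has been clicked.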
Hi, I need help with a cron expression for an alert so that it does not trigger during the following time intervals:

Monday to Friday: 9:30 AM to 2:00 PM
Saturday to Sunday: from 9:30 AM Saturday (the rest of Saturday) through Sunday 2:00 PM

Thank you.
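A single cron expression cannot cleanly express half-hour boundaries like "everything except 9:30 to 2:00", so one common workaround is to keep the normal cron schedule and suppress results inside the quiet window with a search-time guard. A minimal sketch, assuming the search head's local time and that suppressing results (so the trigger condition never fires) is acceptable:

... your alert search ...
| eval wday=lower(strftime(now(), "%A")), hm=tonumber(strftime(now(), "%H%M"))
| eval is_weekend=if(wday="saturday" OR wday="sunday", 1, 0)
| where NOT (
      (is_weekend=0 AND hm >= 930 AND hm < 1400)
      OR (wday="saturday" AND hm >= 930)
      OR (wday="sunday" AND hm < 1400)
  )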
Hello experts, the requirement is to show the number of jobs started and completed in the last 4 hours. I have ingested the job log files into Splunk. From the file name I can derive the job start time, and the first line of the job log is always "Job xxx started"; with this I can count the number of jobs started in an hour. I tried extracting the completion information by searching for the last line of the job log, which is "Job xxx completed successfully", but since there are some delays in data ingestion into Splunk, the previous hour's data also shows up, so the table shows a 5-hour count instead of 4. Now, to identify the jobs that both started and completed successfully within the window, I tried queries with AND and with the append command, unsuccessfully. The criterion is to show the count of jobs that started and completed within a 4-hour time span. I hope we can use AND or a subsearch. Kindly help with this requirement. Regards, Karthikeyan.SV
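A minimal sketch of one way to correlate the start and completion lines per job, assuming the index name, the message texts quoted above, and a job identifier that the rex below can extract; all of these are assumptions to adapt.

index=job_logs ("started" OR "completed successfully") earliest=-4h
| rex "Job (?<job_id>\S+) (?<job_status>started|completed successfully)"
| stats earliest(_time) as start_time
        max(eval(if(job_status="completed successfully", _time, null()))) as end_time
        by job_id
| eval completed = if(isnotnull(end_time), 1, 0)
| stats count as jobs_started sum(completed) as jobs_completed

Because both counts come from one stats pass over the same 4-hour window, a job is only counted as completed if its start also falls inside that window, which avoids the extra hour bleeding in.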
Hi, I am using the Universal Forwarder on a Mac, configured to monitor a few log files. It is sending data fine, and it resumes sending data from those files after a network disruption. The thing is, it is not sending the data that was written to the log files while the internet was off. Maybe it is caching the data elsewhere and not sending it? Reading the documentation, I see that there is no persistent queue for the monitor input. Does that mean the forwarder won't pause the parsing of a log file when it can't reach the server?
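For what it's worth, the monitor input keeps a per-file read pointer (the fishbucket), so after an outage it should resume from where it left off rather than skipping what was written in the meantime; if events from the outage never arrive, one thing worth checking is indexer acknowledgment, which protects the events that were in flight when the connection dropped. A minimal outputs.conf sketch, with the server name as a placeholder:

[tcpout]
defaultGroup = primary
useACK = true

[tcpout:primary]
server = splunk.example.com:9997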
I have tried two input modes: monitor and tcp. When I use the monitor mode and read text files, the Universal Forwarder resumes sending data after network connectivity is lost and restored. However, when I use tcp as an input with a persistent queue, I see that the queue grows while there is no connectivity (for example, if I turn wifi off). When I turn the connection on again, the persistent queue keeps growing and no data is actually sent to the server. I have to restart Splunk for sending to resume. The restart takes a few minutes (not the case with the monitor mode), and when it finally comes back up, the persistent queue is erased and the data that was saved there doesn't get sent. Is there a major bug in the universal forwarder?
Hello, I have a CSV file with host names. I also have this query:

sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset"

This search returns the host name. How can I restrict the search to the hosts in the CSV file, so that only the ones from the file are returned in my global search? Thanks.
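A minimal sketch using the CSV as a lookup-driven subsearch, assuming the file has been uploaded as a lookup named hosts.csv with a column literally named host (both names are placeholders):

sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset"
    [ | inputlookup hosts.csv | fields host ]

The subsearch expands into host="..." OR host="..." terms, so only events from the hosts listed in the file are returned.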
Hi, I need help creating a field by grouping/summing two existing fields. For example:

field 1: count_of_true (independent counts for each service)
field 2: count_of_false (independent counts for each service)

I am looking for a status field which has sum(count_of_true) as true and sum(count_of_false) as false, so that after something like | stats count by status the output looks like:

Status   count
true     212
false    313

I tried using transpose, but the stats gives an unexpected value.
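A minimal sketch of one way to get that shape, assuming count_of_true and count_of_false already exist on the rows: sum them first, then flip columns into rows with transpose.

... base search ...
| stats sum(count_of_true) as true sum(count_of_false) as false
| transpose
| rename column as Status, "row 1" as count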
Hi all, I am seeing a strange issue where occasionally one of my alerts stops working (not always the same one). When this happens I can see the searches running, but the alert does not trigger, even though manually running the search finds the events. I have tweaked the searches to make sure I am not falling foul of the _indextime vs _time issue caused by events arriving outside the search window. It appears that the search just stops triggering, and it starts again when I disable and re-enable the search. Is anyone else seeing this, or does anyone have any ideas?