All Topics

Hello. I have a set of hosts which send some stats. In my case these are rsyslog impstats statistics, but it could be anything, for example SNMP interface counters. The point is that I have a counter which increases with time and I want to compute incremental statistics.

Yes, I know you'll point me towards the delta command, but it can only compute the difference from one event to another, and I have several different sources for which I need separate stats (let's say something like | delta <parameter> by host; unfortunately there's no such command ;-)). After some poking around, the range() statistical function seems to fit nicely: it calculates, as the name implies, the range between the lowest and highest value of the given field, so if I pair it with timechart it works beautifully. Almost.

The problem is that the counters have finite length and after some time overflow back to 0. When this happens, range() of course returns some ridiculous values. If it were a simple delta calculation, I'd probably just do a modulo operation or some other conditional eval to account for it, but I don't see a reasonable way to do that with already summed-up values, since even the field names of the summary table are variable, depend on host names, and I can't know the list of hosts beforehand. Is there any reasonable way to filter out the "overflowed" values? Just using "outliers" also removes the "bottom" values, which is not what I need.
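One possible sketch (the index, sourcetype, and field names below are assumptions, not from the post): streamstats does support a by clause, so per-host deltas can be computed before any summing, and the wrap-around handled at the same time:

```spl
index=stats_data sourcetype=rsyslog_impstats
| streamstats current=f window=1 last(counter) AS prev by host
| eval delta=counter - prev
| eval delta=if(delta < 0, counter, delta)
| timechart span=5m sum(delta) by host
```

The second eval treats a negative delta as a counter reset to zero and falls back to the current value; for a true 32-bit modulo wrap you could instead use delta + pow(2,32). This avoids range() entirely, so the overflow never reaches the summary table.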
I'm using the following to eval current_day:

| inputlookup Files_And_Thresholds
| eval current_day=lower(strftime(relative_time(now(),"@s"),"%A"))

I have a column "file_days" in a lookup file (.csv) with the days I would like to search across, and I cannot figure out why this will not search. If I replace current_day with the string "tuesday" it works fine.

| makemv delim=" " file_days
| search file_days=current_day

Lookup table:

file_cutoff_time,file_days,file_name
23:00:00,thursday wednesday,FILE001.CSV
22:00:00,friday monday thursday tuesday wednesday,FILE002.CSV
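For what it's worth, `search file_days=current_day` compares file_days against the literal string "current_day", not against the field's value; a field-to-field comparison needs where (or eval). A sketch against the same lookup:

```spl
| inputlookup Files_And_Thresholds
| eval current_day=lower(strftime(relative_time(now(),"@s"),"%A"))
| makemv delim=" " file_days
| mvexpand file_days
| where file_days=current_day
```

The mvexpand first splits each row into one row per day, so the where clause compares single values rather than a multivalue field.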
Hi,

The issue is that some servers with the universal forwarder agent deployed on them are not able to successfully download apps from the deployment server.

Environment details:
Server: Linux RHEL 7.9 (3.x kernel)
Deployment server: Splunk Enterprise 8.x
Splunk Universal Forwarder: 8.2.2 for Linux

The agent is successfully installed and connected to the deployment server using the command below:

./splunk set deploy-poll deployment-server:8089

It shows up successfully on the deployment server as well; however, when I push apps to the server via the deployment server, they aren't successfully downloaded.

From the universal forwarder splunkd.log:

ERROR HttpClientRequest *** - HTTP client error=Connection closed by peer while accessing server=*** for request=***

From the deployment server splunkd.log,

What can be the possible reason for this behavior? The communication seems fine (we've opened uni-directional communication from the server to the deployment server on port 8089).

Kind regards
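When debugging this kind of handshake, the _internal logs on both sides are usually the quickest lead; a sketch (the host name is a placeholder):

```spl
index=_internal sourcetype=splunkd host=my-forwarder
    (log_level=ERROR OR log_level=WARN)
    (component=HttpClientRequest OR component=DC*)
| table _time host component _raw
```

"Connection closed by peer" on 8089 is often an SSL mismatch (sslConfig settings differing between forwarder and deployment server) or an intermediate device terminating the session, so comparing both sides' errors at the same timestamp can narrow it down.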
So I have added a table drilldown to this pie chart, but I need the rows in the table displayed according to the value I clicked on the pie chart. For example, if I click on the "production" slice of the pie, only production values should show in the table. How can I do this?
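One way to wire this up in Simple XML (the token name, index, and field names below are made up for illustration) is to set a token from the clicked slice and use it in the table's search:

```xml
<row>
  <panel>
    <chart>
      <search>
        <query>index=my_index | stats count by environment</query>
      </search>
      <option name="charting.chart">pie</option>
      <drilldown>
        <set token="env_tok">$click.value$</set>
      </drilldown>
    </chart>
  </panel>
  <panel depends="$env_tok$">
    <table>
      <search>
        <query>index=my_index environment="$env_tok$" | table _time host status</query>
      </search>
    </table>
  </panel>
</row>
```

For a pie chart, $click.value$ carries the label of the clicked slice, and the depends attribute keeps the table hidden until a slice has been clicked.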
Hi, I need help with a cron expression for an alert, so that it does not trigger during the following time intervals:

Monday to Friday: 9:30 AM to 2:00 PM
Saturday to Sunday: from Saturday 9:30 AM (the whole rest of Saturday) to Sunday 2:00 PM

Thank you
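A single cron expression cannot express "everything except 9:30 to 14:00", so one common workaround is to schedule the alert normally and suppress it inside the search instead; a sketch (the base search is a placeholder):

```spl
your_base_search
| eval wd=tonumber(strftime(now(),"%w")), hm=strftime(now(),"%H%M")
| where NOT (wd>=1 AND wd<=5 AND hm>="0930" AND hm<"1400")
| where NOT (wd=6 AND hm>="0930")
| where NOT (wd=0 AND hm<"1400")
```

%w yields 0 for Sunday through 6 for Saturday, and the zero-padded HHMM strings compare correctly as text. The alternative is several cron schedules that together cover only the allowed windows.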
Hello Experts,

The requirement is to show the number of jobs started and completed in the last 4 hours. I have ingested job log files into Splunk. From the file name I can derive the job start time, and the first line of the job is always "Job xxx started". With this I can count the number of jobs that started in an hour.

I tried extracting the info by searching for the last line of the job log, which is "Job xxx completed successfully", but since there are some delays in data ingestion to Splunk, the previous hour's data shows up in the table, so the table shows a 5-hour count instead of 4 hours.

Now, to identify the number of jobs that started within the window and also completed successfully, I tried to query with the AND operator and the append command, unsuccessfully. The criterion is to show the count of jobs that both started and completed within a 4-hour time span. I hope we can use AND or a subsearch. Kindly help with this requirement.

Regards,
Karthikeyan.SV
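A stats-based sketch (the index name and rex pattern are assumptions about the log format): classify each event, collapse to one row per job, then count the jobs that have both a start and a completion:

```spl
index=job_logs ("started" OR "completed successfully") earliest=-4h
| rex "Job (?<job_id>\S+)"
| eval status=if(searchmatch("completed successfully"), "completed", "started")
| stats count(eval(status="started")) AS started count(eval(status="completed")) AS completed by job_id
| stats count(eval(started>0)) AS jobs_started count(eval(started>0 AND completed>0)) AS jobs_started_and_completed
```

Because the two stats passes pair events by job_id rather than by time bucket, late-indexed completion events still land on the correct job.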
Hi, I am using the Universal Forwarder on a Mac, configured to monitor a few log files. It is sending data fine, and it resumes sending data from those files after a network disruption. The thing is, it is not sending the data written to the log files while the network was off. Maybe it is caching the data elsewhere and not sending it?

Reading the documentation, I see that there is no persistent queue for the monitor input. Does that mean that the forwarder won't pause the parsing of a log file when it can't reach the server?
I have tried two input modes: monitor and tcp. When I use monitor mode and read text files, the Universal Forwarder resumes sending data after network connectivity is lost and restored. However, when I use tcp as an input with a persistent queue, I see that the queue grows while there is no connectivity (for example, if I turn wifi off). When I turn the connection on again, the persistent queue keeps growing and no data is actually sent to the server. I have to restart Splunk for sending to resume. The restart takes a few minutes (not the case with monitor mode), and when it finally restarts, the persistent queue is erased and the data that was saved there doesn't get sent.

Is there a major bug in the Universal Forwarder?
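For reference, a persistent queue on a tcp input is configured per stanza in inputs.conf; a minimal sketch with an illustrative port and sizes:

```conf
# inputs.conf on the forwarder (port and sizes are examples)
[tcp://:5514]
queueSize = 10MB
persistentQueueSize = 100MB
```

If the queue keeps growing after reconnect, the output channel may still be blocked; metrics.log on the forwarder (group=queue entries with blocked=true) shows which pipeline queue is stuck, which would help narrow down whether this is a bug or a blocked output.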
Hello, I have a csv file with host names. I also have this query:

sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset"

This search returns the host name. How can I restrict the search to the hosts in the csv file, so that only the ones from the file are returned in my global search? Thanks
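A subsearch over the lookup can generate the host filter; a sketch assuming the csv is uploaded as a lookup named host_list.csv with a column literally named host:

```spl
sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset"
    [ | inputlookup host_list.csv | fields host ]
```

The subsearch expands into (host="a" OR host="b" ...); if the column has another name, rename it to host inside the subsearch so the generated terms match the event field.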
Hi, I need help creating a field by grouping/summing 2 existing fields.

For example:
field 1: count_of_true (independent counts for each service)
field 2: count_of_false (independent counts for each service)

I am looking for a status field which has sum(count_of_true) as true and sum(count_of_false) as false, as below, after something like | stats count by status:

Status   count
true     212
false    313

I tried using transpose, but the stats gives unexpected values.
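One sketch that produces exactly that two-row table from the summed fields (assuming the events already carry count_of_true and count_of_false):

```spl
your_base_search
| stats sum(count_of_true) AS true sum(count_of_false) AS false
| transpose
| rename column AS Status "row 1" AS count
```

The stats pass yields a single row with fields true and false; transpose then flips it into one row per field, which rename relabels as Status and count.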
Hi All, I am seeing a strange issue where occasionally one of my alerts stops working (not always the same one). When this happens I can see the searches running, but no triggers occur for the alert even though manually running the search finds the events. I have tweaked the searches to make sure I am not falling foul of the _indextime vs _time issue caused by events arriving outside the search window. It appears that the search just stops triggering, and it starts again when I disable/enable the search. Is anyone else seeing this, or does anyone have any ideas?
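The scheduler's own logs usually show whether the search actually ran, returned results, and fired actions; a sketch (the alert name is a placeholder):

```spl
index=_internal sourcetype=scheduler savedsearch_name="My Alert"
| table _time status result_count alert_actions run_time
| sort - _time
```

A status of skipped or continued (rather than success), or a populated result_count with empty alert_actions, would point at scheduler concurrency limits or action failures rather than the alert condition itself.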
Hello, we have created many custom correlation searches in our client's deployed instance. Right now they are creating too many notable events even with the "window limitation". Can somebody help?
Hey Team, I'm looking to ingest Microsoft unified labeling logs into Splunk. MSFT unified labeling is an Azure AIP-based app. Any kind of help/info would be appreciated.
Hi team, I have the below data in Splunk, and I want to get the duration for the following range: ACT starts with "AUTOSAVEFORM_trigReq_AutoSaveForm" and ends with "AUTOSAVEFORM_after_sendRequest".

I have tried the query below, but it doesn't return the correct result:

index=*bizx_application AND sourcetype=perf_log_bizx AND PID="PM_REVIEW" AND PLV=EVENT AND ACT="AUTOSAVEFORM_*" AND C_ACTV="*commentEdit*" OR ACT="*SendRequest"
| reverse
| transaction CMID SID UID startswith="AUTOSAVEFORM_trigReq_AutoSaveForm" endswith="AUTOSAVEFORM_after_sendRequest"
| table _time duration eventcount

Can anyone please help provide a solution?
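One thing worth checking: AND binds tighter than OR, so without parentheses the base search matches (... AND C_ACTV="*commentEdit*") OR ACT="*SendRequest", which can pull in unrelated SendRequest events. A possible restatement with explicit grouping (assuming that is the intent):

```spl
index=*bizx_application sourcetype=perf_log_bizx PID="PM_REVIEW" PLV=EVENT
    ((ACT="AUTOSAVEFORM_*" AND C_ACTV="*commentEdit*") OR ACT="*SendRequest")
| reverse
| transaction CMID SID UID startswith="AUTOSAVEFORM_trigReq_AutoSaveForm" endswith="AUTOSAVEFORM_after_sendRequest"
| table _time duration eventcount
```

It is also worth confirming that the endswith string exactly matches the events' ACT value, since an event the transaction never sees can leave transactions open-ended.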
Hi, I am trying the app "Lookup File Editor" and have problems with the match_type settings. Normal lookups can be configured to use match_type=CIDR or something else, but I can't find similar settings in the "Lookup File Editor" app from Splunkbase. Am I doing something wrong, or is this feature not included?

Thanks
This is the table. How can I group similar names together into one entry so that the counts are added for both of them? For example, 5-Mock Activity and 6-Mock activity should appear in one row as "Mock Activity", and the count for that row should be 19+5, i.e. 24.
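Assuming the leading "5-"/"6-" prefixes are what splits the rows (the field names below are guesses), stripping the prefix and normalizing case before summing would merge them:

```spl
your_base_search
| eval activity=lower(replace(activity_name, "^\d+-\s*", ""))
| stats sum(count) AS count by activity
```

The lower() folds "Activity" and "activity" into one value; drop it if case should be preserved, or re-title the merged value with a second eval.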
Hi, we have a requirement to send alerts to our Teams channel. I have tested both the Splunk Teams add-on and a generic webhook; neither of them can send out the alert. Can anyone please help with this? Thanks, Xin
We're running Splunk 8.1.2 on RHEL 8.x and are using some dashboards that make use of a lookup file "itsp_compliance_settings.csv", an example of which is below:

host_environment,title,setting,must,value
…
Production,IP default-gateway,default_gateway,equal,1.2.3.4
Production,IP default-gateway,default_gateway,equal,5.6.7.9
…

This is an extract of the search behind the dashboard using the above lookup:

index="cisco_ios_config" sourcetype="ApplianceConfigurations:Cisco:IOS"
| dedup host
| fields - tag, - _raw, - tag::eventtype
| rex field=source "\/usr\/local\/rancid\/var\/(?<host_environment>\w+)\/configs\/"
| rex field=source "\/usr\/local\/rancid\/var\/\w+\/configs\/\w+-\w+-(?<extra_host_environment_check>\w+)-"
| lookup ITSP:Compliance_Settings host_environment
| eval zip=mvzip(title, setting, "||")
| eval zip=mvzip(zip, must, "||")
| eval zip=mvzip(zip, value, "||")
| mvexpand zip
| makemv delim="||" zip
| eval title=mvindex(zip,0)
| eval setting=mvindex(zip,1)
| eval must=mvindex(zip,2)
| eval value=mvindex(zip,3)
| foreach * [ eval field=if("<<FIELD>>"==setting,<<MATCHSTR>>,field)]
| fillnull value="Setting not found" field
| mvexpand field
| eval fail=if(trim(field)==trim(value),if(must=="equal",0,1),if(must=="equal",1,0))
| stats sum(fail) AS "Count" by title
| rename title AS "Setting"
| eval Status=if(Count > 0, "error", "ok")

Can someone please help and tell me if it is possible to adapt the search to take into account more than one possible value in the lookup (both default gateways are valid), as per the above example? Thanks
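Since the mvexpand already produces one row per candidate value, one possible sketch is to keep the per-candidate fail computation and then collapse per host and setting before the final sum: for must=equal a single matching candidate should clear the host (min), while for the opposite case any match should flag it (max). For example, replacing the last three lines of the search:

```spl
| stats min(fail) AS min_fail max(fail) AS max_fail by host, title, must
| eval fail=if(must=="equal", min_fail, max_fail)
| stats sum(fail) AS "Count" by title
| rename title AS "Setting"
| eval Status=if(Count > 0, "error", "ok")
```

This way a host whose gateway matches either 1.2.3.4 or 5.6.7.9 contributes 0 to the Count, instead of one false failure per non-matching candidate row.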
Hi Experts,

Question: does anyone know how to change the STS endpoint to a private VPC endpoint (interface) address when adding an account to the Splunk Add-on for AWS during setup?

I am trying to deploy Splunk on a VM in a private subnet (no route to the internet) in a VPC in AWS, and to index data from S3 (and more later). Currently I have set up VPC endpoints (interface) for S3 and STS, and have confirmed that both endpoints are accessible from the VM with an account via the awscli. When I tried to add an account in the add-on's account setup, the add-on actually tried to talk to STS through the public endpoint, to which the private network has no route. I would like to change the add-on configuration so that it talks to the private STS VPC endpoint address and completes the setup/account addition. If there is another way to run Splunk in a private subnet, I would like to know about it too. Any comment would be appreciated. Thank you!
I am looking for a solution to transfer logs from Splunk and store them in MongoDB. Can anyone suggest an approach?