All Posts

It is like that for a couple of minutes. I tried to refresh the page and re-upload, and it behaves the same. What can I do?
Is it possible to set a token based on the value of the x-axis label on a column chart by clicking on the column?  I am able to set the new token to the value (number) or name (count) but that doesn't give me what I need.  I need to pass the X label to a second search.
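In Simple XML, the x-axis label of a clicked column is exposed through the predefined $click.value$ drilldown token (the y-value is in $click.value2$). A minimal sketch, assuming a hypothetical search, index, and token name:

<chart>
  <search>
    <query>index=web | stats count BY category</query>
  </search>
  <option name="charting.chart">column</option>
  <drilldown>
    <!-- $click.value$ carries the x-axis label of the clicked column -->
    <set token="xlabel_tok">$click.value$</set>
  </drilldown>
</chart>

A second search in the dashboard can then reference $xlabel_tok$ in its query.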
Try something like this

| sort src_host Service _time
| streamstats current=f window=1 last(event_type) as previous_event_type by src_host Service
| eval problem_start=if(event_type="PROBLEM" AND (isnull(previous_event_type) OR previous_event_type != "PROBLEM"),_time,null())
| streamstats max(problem_start) as problem_start by src_host Service global=f
| eval problem_time=if(event_type="PROBLEM" OR previous_event_type="PROBLEM",_time-problem_start,null())
| where problem_time > 900
The Veeam Backup & Replication Events (VeeamVbrEvents) Data Model requires the "original_host" field to be present in events. Looking at your screenshots, it looks like that field is missing from your events - I've come across this issue too.

The Veeam app includes a "veeam_vbr_syslog : EXTRACT-original_host" field extraction that wasn't working for me. It uses this regex:

\d+-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[\+\-]\d{2}:\d{2}\s(?<original_host>\S+)

This expects "original_host" to appear in the raw event after the timestamp and a space. Are you sending syslog directly to Splunk as per the Veeam app documentation, or are you sending it via SC4S or another syslog server? In the scenario where I came across this issue, Veeam was sending syslog to SC4S, which was stripping the timestamp out of the raw event and therefore breaking the original_host extraction. SC4S was actually setting the "host" value for each event correctly, so I was able to add a Field Alias instead - set to apply to the veeam_vbr_syslog sourcetype and set host = original_host, like this:
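A minimal props.conf sketch of that Field Alias (the alias class name after FIELDALIAS- is arbitrary; verify the stanza against your own environment):

[veeam_vbr_syslog]
FIELDALIAS-original_host = host AS original_host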
Hi @Nawab ,
if you have a list of hosts to monitor, you could put it in a lookup (called e.g. perimeter.csv and containing at least two columns: sourcetype, host) and run a search like the following:

| tstats count WHERE index=* BY sourcetype host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host sourcetype count ]
| stats sum(count) AS total BY sourcetype host
| where total=0

If you don't have this list and you want to check for hosts that sent logs in the last week but not in the last hour, you could run:

| tstats count latest(_time) AS _time WHERE index=* BY sourcetype host
| eval period=if(_time<now()-3600,"previous","latest")
| stats dc(period) AS period_count values(period) AS period BY sourcetype host
| where period_count=1 AND period="previous"

The first solution gives you more control but requires you to manage the perimeter lookup.

Ciao.
Giuseppe
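For illustration, a hypothetical perimeter.csv with the two required columns might look like:

sourcetype,host
wineventlog,host01.example.com
syslog,host02.example.com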
OK. You can't do anything with data you already removed in your search pipeline, so you can't run two separate stats commands with different aggregations and different sets of "by" fields. Either rewrite your search to use a more granular set of "by" fields (though if you add too many of them you might get too many results) and then additionally summarize your events later (for example using eventstats), or simply use two separate searches.
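A sketch of the eventstats approach, with hypothetical field names: keep the granular "by" clause in stats, then layer the broader aggregation on top without discarding rows.

| stats count AS TotalTrans BY Product URI methodName _time
| eventstats sum(TotalTrans) AS ProductTotal BY Product URI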
You can use a tool like https://www.nirsoft.net/utils/simple_wmi_view.html to verify your WQL.
OK. Different retention periods are a valid reason for distributing data between different indexes. The caveat with splitting data this way is that while configuration like

[mysourcetype]
TRANSFORMS-redirect = redirect_to_index1, redirect_to_index2, redirect_to_index3...

is valid, you have to remember that all transforms will be called for each event. So Splunk will try to match each of the regexes contained within every transform to each event. The more indexes you want to split to, the more work the indexer (or HF, depending on where you put this config) will have to do.

Additional question - where are you getting the data from? Maybe it would be better to split the event stream before it hits Splunk.
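To make the shape of that configuration concrete, here's a minimal sketch with hypothetical stanza names and regexes; each transform overwrites the index metadata key for the events its regex matches:

# props.conf
[mysourcetype]
TRANSFORMS-redirect = redirect_to_index1, redirect_to_index2

# transforms.conf
[redirect_to_index1]
REGEX = pattern_for_index1
DEST_KEY = _MetaData:Index
FORMAT = index1

[redirect_to_index2]
REGEX = pattern_for_index2
DEST_KEY = _MetaData:Index
FORMAT = index2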
I tried with the WQL that is in the Splunk App for Windows by default. It gives the same error.

I am using WMI because I want to fetch near real-time resource consumption for the services running on Windows. That information is not coming via Perfmon.
I am using two stats commands:

1. The first stats has some fields split by _time:
| stats count(totalResponseTime) as TotalTrans by Product URI methodName _time

2. The second stats has some fields split without time:
| stats sum(TS>3S) as AvgImpact count(URI) as DataOutage by Product URI Method

I want the fields from both stats to be displayed in the result, e.g.:
| fields TotalTrans Product URI Method AvgImpact DataOutage

How can I achieve this?
I have a deployment where multiple computers are sending logs to a WEF server using WEF (Windows Event Forwarding). I tried to map the ComputerName field to the host field but failed to do so. Now I want to create an alert if any of the computers is not sending logs to Splunk. How can I do so?

The method defined by Splunk is based on the index, host, and sourcetype fields, which will remain the same for all computers in our case.
https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsoverview/
https://docs.splunk.com/Documentation/Splunk/latest/Data/Getdatafromscriptedinputs

Writing a custom script would, of course, be up to you. Monitoring an intermediate file is just normal file ingestion, so nothing extraordinary.
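For the scripted-input route, a minimal inputs.conf sketch (the script path, interval, and sourcetype here are hypothetical):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_poller.py]
interval = 300
sourcetype = my:custom:data
index = main
disabled = 0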
Please find my answers inline below.

Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data even if it was recovered after 16 minutes or later?
- If the PROBLEM alert is not RECOVERED after 15 minutes, we need to trigger a script.

Are you only interested in whether the last problem (without a recovery) was over 15 minutes ago?
- Yes.

Can you get multiple problem (without recovery) events for the same problem?
- Yes. I am running this on edge nodes, which are a limited set of hosts. It could be multiple hosts as well.

Does the 15 minutes start when the PROBLEM event for the latest PROBLEM first occurs?
- Yes.

Does the 15 minutes start when the PROBLEM event for the latest PROBLEM last occurs?
- No.

How far back are you looking for these events?
- The last 30 minutes.

How often are you looking for these events?
- Every 15 minutes.

Can you check the below snippet as well?
The error says it all. The wql parameter needs a valid WQL query to retrieve the data. Yours is not a proper WQL query. BTW, why are you using WMI? This is one of the worst ways of getting data from Windows.
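For comparison, a syntactically valid wmi.conf input looks like this sketch - the stanza name, WMI class, and properties are hypothetical, so check that the class exists on your hosts:

[WMI:service_resources]
server = localhost
interval = 60
wql = SELECT Name, ProcessId, WorkingSetSize FROM Win32_Process
index = windows
disabled = 0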
Please clarify your requirements.

- Do you need an alert if there has been a problem which has not been recovered within 15 minutes in your data, even if it was recovered after 16 minutes or later?
- Are you only interested in whether the last problem (without a recovery) was over 15 minutes ago?
- Can you get multiple problem (without recovery) events for the same problem?
- Does the 15 minutes start when the PROBLEM event for the latest PROBLEM first occurs?
- Does the 15 minutes start when the PROBLEM event for the latest PROBLEM last occurs?
- How far back are you looking for these events?
- How often are you looking for these events?
Or is Data Manager the only solution for this kind of input?
Hi, I've run into an input issue with S3 data that is not in an AWS Security Lake. Is it possible to use the Splunk Add-on for AWS to ingest an S3 bucket with Parquet-formatted files?
Thanks for the response. Any reference link to achieve the same would be helpful.
Hi,

Apologies if I'm using the wrong terminology here. I'm trying to configure SC4S to override the destination indexes for certain types of sources. For example, if an event is received from a Cisco firewall, by default it'll end up in the 'netfw' index. Instead, I want all events that would have gone to 'netfw' to go to, for example, 'site1_netfw'.

I attempted to do this using the splunk_metadata.csv file, but I now understand I've misinterpreted the documentation. I had used 'netfw,index,site1_netfw', but if I understand correctly, I'd actually need a separate line for each source key, such as 'cisco_asa,index,site1_netfw' (see the sketch below). Is that correct? Is there a way to accomplish what I want without listing each source key?

Thanks
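A hypothetical per-key splunk_metadata.csv in that form, with one line per source key (the keys listed are illustrative):

cisco_asa,index,site1_netfw
cisco_ios,index,site1_netfw
cisco_meraki,index,site1_netfw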
Perfect. Just to fast-track the process of getting service KPI IDs, we can use the "service_kpi_lookup" lookup to find the kpi_id and search directly using that ID in saved searches to spot the KPI base search:

| inputlookup service_kpi_lookup
| search title="your_service_name"