Hi AppDynamics Community, I have a scenario where I have 6 different MariaDB instances running in 6 different containers on the same server host, and I have 1 Linux VM on which I installed the Database Agent. Do I need 6 Database Agent licenses for the 6 collectors I configure, or do I need just 1 Database Agent license for the VM, in which I can configure the 6 collectors? Thanks in advance. Hope everybody has a great week! Regards
I just installed the CEF Extraction add-on for Splunk and I want to try it with this example:
| makeresults
| eval _raw="CEF:0|vendor|product|1.0|TestEvent|5| filename=name.txt ip=10.10.1.2 fullname=mike reacher status=ok"
| kv
| table fullname filename ip *
Why didn't it work? All this because the default kv extraction doesn't support multi-word string values containing whitespace.
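One workaround (a sketch, not tested against your real CEF data) is to pull out the whitespace-containing values with rex, using a lazy capture plus a lookahead that stops at the next key= pair or at the end of the event:

| makeresults
| eval _raw="CEF:0|vendor|product|1.0|TestEvent|5| filename=name.txt ip=10.10.1.2 fullname=mike reacher status=ok"
| rex field=_raw "\bfilename=(?<filename>.+?)(?=\s+\w+=|$)"
| rex field=_raw "\bip=(?<ip>.+?)(?=\s+\w+=|$)"
| rex field=_raw "\bfullname=(?<fullname>.+?)(?=\s+\w+=|$)"
| rex field=_raw "\bstatus=(?<status>.+?)(?=\s+\w+=|$)"
| table fullname filename ip status

With this, fullname comes out as "mike reacher" because the capture only stops when the lookahead sees the following " status=" pair.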
This worked for me! I had the same scenario: DS up and running but no clients displayed. After I deleted the instance file and restarted, it is working now!
Is it possible to set a token based on the value of the x-axis label on a column chart by clicking on the column? I am able to set the new token to the value (number) or name (count), but that doesn't give me what I need. I need to pass the x-axis label to a second search.
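For reference, in Simple XML the x-axis value of the clicked column is normally exposed as $click.value$, so a drilldown along these lines (panel title, search, and token name are made up for illustration) should set a token you can feed to a second search:

<chart>
  <title>Count by sourcetype (example)</title>
  <search>
    <query>index=_internal | stats count by sourcetype</query>
  </search>
  <drilldown>
    <!-- $click.value$ carries the x-axis label of the clicked column -->
    <set token="selected_label">$click.value$</set>
  </drilldown>
</chart>

Another panel can then reference $selected_label$ in its search query.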
Try something like this ...
| sort src_host Service _time
| streamstats current=f window=1 last(event_type) as previous_event_type by src_host Service
| eval problem_start=if(event_type="PROBLEM" AND (isnull(previous_event_type) OR previous_event_type != "PROBLEM"),_time,null())
| streamstats max(problem_start) as problem_start by src_host Service global=f
| eval problem_time=if(event_type="PROBLEM" OR previous_event_type="PROBLEM",_time-problem_start,null())
| where problem_time > 900
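If you want to sanity-check the logic before pointing it at real data, a synthetic run (host and service names invented) could be generated like this and then piped into the sort/streamstats/eval/where lines above:

| makeresults count=5
| streamstats count as n
| eval _time=now()-(6-n)*600
| eval src_host="hostA", Service="web"
| eval event_type=if(n<=3,"PROBLEM","OK")

The events later in the simulated PROBLEM run come out with problem_time above 900 seconds and survive the final where filter.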
The Veeam Backup & Replication Events (VeeamVbrEvents) Data Model requires the "original_host" field to be in events. Looking at your screenshots, it looks like that field is missing from your events - I've come across this issue too. The Veeam app includes a "veeam_vbr_syslog : EXTRACT-original_host" field extraction that wasn't working for me - it used this regex:
\d+-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[\+\-]\d{2}:\d{2}\s(?<original_host>\S+)
This expects "original_host" to appear in the raw event immediately after the timestamp and a space. Are you sending syslog directly to Splunk as per the Veeam App documentation, or are you sending it via SC4S or another syslog server? In the scenario where I came across this issue, Veeam was sending syslog to SC4S, which was stripping the timestamp out of the raw event and therefore breaking the original_host extraction. SC4S was actually setting the "host" value for each event correctly, so I was able to add a Field Alias instead, set to apply to the veeam_vbr_syslog sourcetype and aliasing host to original_host, like this:
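If you prefer to define it in configuration rather than through the UI, the equivalent props.conf stanza should look roughly like this (the class name after FIELDALIAS- is arbitrary):

[veeam_vbr_syslog]
# Copy the SC4S-provided host value into the original_host field the data model expects
FIELDALIAS-original_host = host AS original_host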
Hi @Nawab , if you have a list of hosts to monitor, you could put it in a lookup (called e.g. perimeter.csv and containing at least two columns: sourcetype, host) and run a search like the following:

| tstats
count
WHERE index=*
BY sourcetype host
| append [
| inputlookup perimeter.csv
| eval count=0
| fields host sourcetype count ]
| stats sum(count) AS total BY sourcetype host
| where total=0

If you don't have this list and you want to check hosts that sent logs in the last week but not in the last hour, you could run:

| tstats
count
latest(_time) AS _time
WHERE index=*
BY sourcetype host
| eval period=if(_time<now()-3600,"previous","latest")
| stats
dc(period) AS period_count
values(period) AS period
BY sourcetype host
| where period_count=1 AND period="previous"

The first solution gives you more control but requires you to manage the perimeter lookup. Ciao. Giuseppe
OK. You can't do something with the data you already removed in your search pipeline, so you can't do two separate stats commands with different aggregations and different sets of "by" fields. Either rewrite your search to have a more granular set of "by" fields (but if you use too many of them you might get too many results) and then additionally summarize your events later (for example using eventstats), or simply use two separate searches.
OK. Different retention periods are a valid reason for distributing data between different indexes. The caveat with splitting data this way is that while configuration like

[mysourcetype]
TRANSFORMS-redirect=redirect_to_index1,redirect_to_index2,redirect_to_index3...

is valid, you have to remember that all transforms will be called for each event. So Splunk will try to match the regexes contained within every transform against each event. The more indexes you want to split to, the more work the indexer (or HF, depending on where you put this config) will have to do. Additional question - where are you getting the data from? Maybe it would be better to split the event stream before it hits Splunk.
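For completeness, each of those transforms would typically look something like the following in transforms.conf (the regex and index name here are placeholders for your own):

[redirect_to_index1]
# Events matching this pattern get rerouted to the index named in FORMAT
REGEX = pattern_identifying_index1_events
DEST_KEY = _MetaData:Index
FORMAT = index1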
I tried with the WQL that is in the Splunk App for Windows by default. It is giving the same error. I am using WMI because I want to fetch near-real-time resource consumption for the services running on Windows. That information is not coming via Perfmon.
I am using two stats commands:
1. The 1st stats groups some fields by _time:
| stats count(totalResponseTime) as TotalTrans by Product URI methodName _time
2. The 2nd stats groups some fields without _time:
| stats sum(TS>3S) As AvgImpact count(URI) as DataOutage by Product URI Method
I want the fields from both stats to be displayed in the result, e.g.:
| fields TotalTrans Product URI Method AvgImpact DataOutage
How can I achieve this?
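One pattern that might work here (a sketch only: I'm guessing "TS>3S" means a per-event slow-transaction flag with a 3-second threshold, and I'm reusing methodName for both groupings - adjust to your real fields) is to compute everything in one granular stats and then roll the time-independent totals up with eventstats:

| eval slow=if(totalResponseTime>3000,1,0)
| stats count(totalResponseTime) as TotalTrans sum(slow) as SlowCount count(URI) as URICount by Product URI methodName _time
| eventstats sum(SlowCount) as AvgImpact sum(URICount) as DataOutage by Product URI methodName
| table _time Product URI methodName TotalTrans AvgImpact DataOutage

This works because additive aggregations computed per time bucket can be summed back up across buckets by eventstats without a second pass over the raw events.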
I have a deployment where multiple computers are sending logs to a WEF server using WEF (Windows Event Forwarding). I tried to map the ComputerName field to the host field but failed to do so. Now I want to create an alert if any of the computers is not sending logs to Splunk. How can I do so? The method defined by Splunk is based on the index, host and sourcetype fields, which will remain the same for all computers in our case.
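Since host is the same for every machine, one option is to alert on ComputerName directly. A rough sketch (it assumes ComputerName is extracted at search time; the index and sourcetype are placeholders for yours) that finds computers seen in the last 7 days but silent in the last hour:

index=wineventlog sourcetype=XmlWinEventLog earliest=-7d
| stats latest(_time) as last_seen by ComputerName
| where last_seen < relative_time(now(),"-1h")
| eval last_seen=strftime(last_seen,"%F %T")

Saved as an alert, this returns one row per quiet computer, so you can trigger when the result count is greater than zero.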
https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsoverview/ https://docs.splunk.com/Documentation/Splunk/latest/Data/Getdatafromscriptedinputs Writing a custom script would of course be up to you. Monitoring an intermediate file is just normal file ingestion, so nothing extraordinary.
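As a rough illustration of the scripted-input route (the app, script, sourcetype, and index names here are all made up), the inputs.conf stanza would look something like this, with the script simply writing events to stdout on each run:

[script://$SPLUNK_HOME/etc/apps/myapp/bin/my_script.sh]
# Run the script every 5 minutes and index whatever it prints to stdout
interval = 300
sourcetype = my_custom_data
index = main
disabled = 0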