All Posts

Hi gcusello, thanks for the reply. I am looking to get results like the ones below.

my base search
| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"

Results:

ServiceName                          Trans_Time   Count
A                                    60           1111
B                                    40           1234
Other_Services (C, D, E, F, G, H)    25           1234567
Hi @kc_prane , only one question: what's time_Taken? If it's a field, please try something like this:

<your_search>
| eval services_names=if(services_names IN ("A", "B"), services_names, "Other_Services")
| stats values(time_Taken) AS time_Taken BY services_names
| table services_names time_Taken

Otherwise, please explain what time_Taken is, or apply my approach to your search. Ciao. Giuseppe
I have services_names (A, B, C, D, E, F, G, H, I, J, K, L, M) but want (C, D, E, F, G, H, I, J, K, L, M) renamed as "Other_Services" before doing | stats BY services_names | table services_names time_Taken. Thanks in advance!
Hi @harishsplunk7 , please try this:

index=_audit tag=authentication info=succeeded earliest=-30d@d latest=now
| stats count BY user
| append
    [ | rest /services/authentication/current-context
      | where NOT username="splunk-system-user"
      | eval count=0
      | rename username AS user
      | fields user count ]
| stats sum(count) AS total BY user
| where total=0

Ciao. Giuseppe
I am looking for a search query to show which users have not logged into Splunk. For example, we have 1500 user accounts but only 1200 users have logged into Splunk in the last 90 days and the remaining 300 users have not, so I want to list those 300 users. I have a retention period of 1 year.
It depends on the retention period of your indexes - essentially you need the latest time by user but if your retention period is not large enough you may not find the user you are looking for - all that tells you is that there is no record for the user, which may or may not be useful.
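A minimal sketch of that approach, reusing the audit search style from the reply further up this thread (the 90-day window is an assumption):

index=_audit tag=authentication info=succeeded earliest=-90d@d
| stats latest(_time) AS last_login BY user
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")

Any user missing from this result has no login record within the window, which is exactly the caveat above: it tells you there is no record, not necessarily that the account was never used.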
Thank you for the help @yuanliu 
How do I get the users who have not logged into Splunk for the last 30 or 90 days, using the _audit or _internal index?
Hi AppDynamics Community, I have a scenario where I have 6 different MariaDB instances running in 6 different containers on the same server host, and I have 1 Linux VM on which I installed the Database Agent. Do I need 6 Database Agent licenses for the 6 collectors I configure, or do I need just 1 Database Agent license for the VM in which I can configure the 6 collectors? Thanks in advance. Hope everybody has a great week! Regards
I just installed the CEF Extraction add-on for Splunk and I want to try it, for example:

| makeresults
| eval _raw="CEF:0|vendor|product|1.0|TestEvent|5| filename=name.txt ip=10.10.1.2 fullname=mike reacher status=ok"
| kv
| table fullname filename ip *

Why didn't it work? All this because the default kv extraction doesn't support multi-word values containing whitespace.
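As a workaround sketch (the lookahead regex below is my own assumption, not something the CEF Extraction add-on ships), a rex that only ends a value at the next key= token keeps the embedded space:

| makeresults
| eval _raw="CEF:0|vendor|product|1.0|TestEvent|5| filename=name.txt ip=10.10.1.2 fullname=mike reacher status=ok"
| kv
| rex field=_raw "fullname=(?<fullname>.*?)(?=\s+\w+=|$)"
| table fullname filename ip

Here kv still extracts the single-word pairs, and rex overrides fullname so "mike reacher" survives with its whitespace.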
This worked for me! I had the same scenario, DS up and running but no clients displayed. After I deleted the instance file and restarted, it is working now!
Yeah, that is the solution we ended up with as well.
Magic.  Thanks!
Use $row.<x-axis-fieldname>.value$, i.e. the name of the field used for your x-axis.
It is like that for a couple of minutes. I tried to refresh the page and re-upload, and it behaves the same. What can I do?
Is it possible to set a token based on the value of the x-axis label on a column chart by clicking on the column?  I am able to set the new token to the value (number) or name (count) but that doesn't give me what I need.  I need to pass the X label to a second search.
Try something like this:

...
| sort src_host Service _time
| streamstats current=f window=1 last(event_type) as previous_event_type by src_host Service
| eval problem_start=if(event_type="PROBLEM" AND (isnull(previous_event_type) OR previous_event_type != "PROBLEM"), _time, null())
| streamstats max(problem_start) as problem_start by src_host Service global=f
| eval problem_time=if(event_type="PROBLEM" OR previous_event_type="PROBLEM", _time-problem_start, null())
| where problem_time > 900
The Veeam Backup & Replication Events (VeeamVbrEvents) Data Model requires the "original_host" field to be present in events. Looking at your screenshots, it looks like that field is missing from your events - I've come across this issue too.

The Veeam app includes a "veeam_vbr_syslog : EXTRACT-original_host" field extraction that wasn't working for me - it uses this regex:

\d+-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+[\+\-]\d{2}:\d{2}\s(?<original_host>\S+)

This expects "original_host" to appear in the raw event after the timestamp and a space. Are you sending syslog directly to Splunk as per the Veeam app documentation, or are you sending it via SC4S or another syslog server?

In the scenario where I came across this issue, Veeam was sending syslog to SC4S, which was stripping the timestamp out of the raw event and therefore breaking the original_host extraction. SC4S was actually setting the "host" value for each event correctly, so I was able to add a Field Alias instead, applied to the veeam_vbr_syslog sourcetype and set to host = original_host.
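For reference, a props.conf sketch of that alias might look like the following (this assumes your events land in the veeam_vbr_syslog sourcetype and that you manage aliases in props.conf rather than the UI):

[veeam_vbr_syslog]
FIELDALIAS-original_host = host AS original_host

With that in place, original_host is populated from the host value that SC4S already sets, which is enough to satisfy the data model constraint.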
Hi @Nawab , if you have a list of hosts to monitor, you could put it in a lookup (called e.g. perimeter.csv and containing at least two columns: sourcetype, host) and run a search like the following:

| tstats count WHERE index=* BY sourcetype host
| append
    [ | inputlookup perimeter.csv
      | eval count=0
      | fields host sourcetype count ]
| stats sum(count) AS total BY sourcetype host
| where total=0

If you don't have this list and you want to check hosts that sent logs in the last week but not in the last hour, you could run:

| tstats count latest(_time) AS _time WHERE index=* BY sourcetype host
| eval period=if(_time<now()-3600,"previous","latest")
| stats dc(period) AS period_count values(period) AS period BY sourcetype host
| where period_count=1 AND period="previous"

The first solution gives you more control but requires you to manage the perimeter lookup. Ciao. Giuseppe
OK. You can't do anything with data you have already removed from your search pipeline, so you can't do two separate stats commands with different aggregations and different sets of "by" fields. Either rewrite your search to use a more granular set of "by" fields (but if you use too many of them you might get too many results) and then additionally summarize your events later (for example using eventstats), or simply use two separate searches.
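A minimal sketch of the first option (the index and field names here are placeholders, not taken from your search):

index=my_index
| stats count BY host sourcetype user
| eventstats sum(count) AS total_per_host BY host

The stats keeps the granular breakdown by all three fields, and eventstats then adds the coarser per-host total as an extra column without discarding that detail.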