All Posts

Hello,

1. Is there an option (built in or manually built) for a container to view the history of older containers that have the same artifacts and details? It would make an analyst's work easier to see the notes and how the older case was solved.

2. When "logging" is enabled for a playbook, where are the playbook logs stored so they can be accessed later on (besides viewing the debug output in the UI)?

Thank you in advance!

Hi @fishn

To match the partial string in the lookup (e.g. poda) with the data (e.g. "poda-284489-cs834"), you need to append a wildcard asterisk to each of the pod_name_lookup values, i.e. poda*, podb*, podc*. Then create a lookup definition and enable wildcard matching under the Advanced options checkbox.

Then in your search (where lkp_pod_name is your lookup definition):

| lookup lkp_pod_name pod_name_lookup as pod_name

---

Next, to show which pods are missing and their importance, you can do it like this:

index=abc sourcetype=kubectl
| eval Observed=1
| append [| inputlookup lkp_pod_name | eval Observed=0 ]
| lookup lkp_pod_name pod_name_lookup as pod_name OUTPUT pod_name_lookup
| stats max(Observed) as Observed by pod_name_lookup, importance
| where Observed=0

---

Finally, to count how many critical and non-critical pods are not found, as well as table the list of missing pods, you can append this line to the above search:

| eventstats count as count_by_importance by importance

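Presumably the Advanced option being referred to is the wildcard match type. In transforms.conf terms, the lookup definition would look roughly like this (a sketch; the filename pod_names.csv is an assumption, not taken from the thread):

[lkp_pod_name]
filename = pod_names.csv
match_type = WILDCARD(pod_name_lookup)
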
Hi, just curious - what are the security implications to think about when enabling this? If it is used in conjunction with the trusted domain list in web-features.conf, should we be secure? Or is there something else to consider?

In your first attempt it should have been like this:

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| inputlookup append=t host_lookup.csv
| fillnull count value=0
| stats max(count) as count by host
| where count<100

But if you want to search at 1-minute granularity, what should happen if one minute is > 100 and another is < 100?

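If per-minute counts are what is needed, one way to sketch it would be with timechart, which zero-fills empty minutes for any host that appears at least once in the search window (a host with no events at all still needs the lookup/append technique discussed elsewhere in this thread):

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| timechart span=1m count by host
| untable _time host count
| where count<100
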
Thanks for your practical answer. This is not what I asked for, but it is really what I need. Appreciate it very much!

Hi

I finished upgrading Splunk ES to 7.3.0 on 1 of 2 non-clustered Search Heads and I receive this error in the Search Head Post Install Configuration wizard menu: "Error in 'essinstall' command: Automatic SSL enablement is not permitted on the deployer". Splunk support have recommended changing the setting in web.conf to "splunkdConnectionTimeout = 3000", which I added to the system file before restarting splunkd. Unfortunately this timeout setting does not fix this "known issue". I selected the Enable SSL option in the post-config process because I know that SSL is enabled in both the Deployer and SH web configs. If anyone has a workaround for this, or can suggest how I can enable SSL after the post configuration of Splunk ES on both the SH and Deployer, it would be appreciated.

Thanks

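For reference, the settings mentioned above would normally sit in the [settings] stanza of web.conf on the search head; a minimal sketch of what that stanza might look like (this reflects the poster's described configuration, not a confirmed fix for the essinstall error):

[settings]
enableSplunkWebSSL = true
splunkdConnectionTimeout = 3000
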
Hi @vm_molson is the search for the lookup file within the same dashboard, or some other dashboard linked from a drilldown? If it's within the same dashboard, you can simply add something like this to the search for the lookup:

| search date<=$global_time.latest$ date>=$global_time.earliest$

However if you want to link to a different search, you might need to go down this route (link) where you would add the variables directly in the URL parameters of the link the user would click on.

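As a rough illustration of the URL-parameter approach (the target dashboard name is made up here, and the token names assume the time input is called global_time), a drilldown link can pre-set tokens on the target dashboard like this:

/app/search/target_dashboard?form.global_time.earliest=$global_time.earliest$&form.global_time.latest=$global_time.latest$
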
index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| eval event_ct=1
| append [| makeresults
  | eval host="appdatajs01, appdatajs02, appdatajs03, appdatajs04"
  | rex field=host mode=sed "s/\s+//g"
  | eval host=split(host,",")
  | mvexpand host
  | eval event_ct=0 ]
| stats sum(event_ct) AS event_ct BY host
| where event_ct <100

I tried the above query and it is working as expected, but I need to see the data in a span of 1m. I tried adding | bin span=1m but it is not working. Can anyone help with this request?

@bowesmana , tried the below query but it is not working:

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| inputlookup append=t host_lookup.csv
| fillnull count value=0
| stats count by host

Below is the csv file used:

Hosts
appdatajs01
appdatajs02
appdatajs03
appdatajs04

This is the classic proving-the-negative issue. Splunk will count events based on your search criteria. It can't create new categories of result for things that are not there. You need to explicitly add the hosts into the final result table if they are not present in the first count. You can do something like this, which will add in zero values for your 4 hosts and then take the max count for each host:

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| appendpipe [ | where a=1 | makeresults | fields - _time | eval host=split("appdatajs01,appdatajs02,appdatajs03,appdatajs04",",") | mvexpand host | eval count=0 ]
| stats max(count) as count by host
| where count<100

Note that your statement "|bin span=1m _time" does nothing because you have no _time field after the stats command.

Normally with this proving-the-negative technique, you would add all the hosts you are interested in into a lookup file and, instead of appendpipe, use

| inputlookup append=t host_lookup.csv
| fillnull count value=0
| stats...

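Put together, the lookup-based variant would look roughly like this (a sketch; it assumes host_lookup.csv has a column named host that matches the field name used in the search):

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| inputlookup append=t host_lookup.csv
| fillnull value=0 count
| stats max(count) as count by host
| where count<100
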
Any update on this issue?

index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| where count < 100
| bin span=1m _time

We have an alert with the above query. The alert is getting triggered when the count for a host is less than 100, but not when the count for a host is zero. How do I make the alert trigger even when the count is 0?

Great, thanks - that makes it easier!

OK, so it looks like you are trying to compare fields in two separate events - you can't do that unless you collapse the two. You should use rex to extract a single filename and then do something similar to my previous post. Here's an example that hopefully will point you in the right direction. It creates two events 60 seconds apart, each containing a filename. The rex statements extract filename and logtype, and the stats will join the events together; by using min and max on _time you can get the start and end times for the pair of events. The final where clause will ensure that you have seen both log a and log b events.

| makeresults
| eval v=split("log a: There is a file has been received with the name test2.txt###log b: The file has been found at the second destination C://user/test2.txt", "###")
| mvexpand v
| streamstats c
| eval _time=now()-(60*c)
| rename v as _raw
``` Above is simply a data set up example ```
| rex field=_raw "(/[a-zA-Z0-9]+\/|name )(?<filename>[^\"]*)"
| rex field=_raw "log (?<logtype>\w)"
| stats count min(_time) as Starttime max(_time) as Endtime values(logtype) as logtype by filename
| where count=2 AND logtype="a" AND logtype="b"
| eval diff = Endtime - Starttime

Hope this helps.

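Against real data there is no "log a:" / "log b:" prefix, so the log type has to be derived from the message text itself. A rough sketch under that assumption (the base search is left redacted as in the thread, and the match strings and rex pattern are guesses based on the sample logs, not tested against the actual events):

index=a env=a account= ("has been received with the name" OR "has been found at the second destination")
| eval logtype=if(match(_raw, "has been received with the name"), "a", "b")
| rex field=_raw "(?:with the name |\/)(?<filename>[^\/\s]+)$"
| stats count values(logtype) as logtypes min(_time) as Starttime max(_time) as Endtime by filename
| where count=2 AND logtypes="a" AND logtypes="b"
| eval diff = Endtime - Starttime
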
This is exactly what I am doing, nothing more. Let me try your logic.

index= cloudaccount= cloudenv=impl source= (string in log a OR string in log b)
| rex field=_raw "/[a-zA-Z0-9]+\/(?<filename>[^\"]*)"
| rex field=_raw "[a-zA-Z0-9]+\/(?<filename2>[^\"]*)"
| eval Endtime = strftime(_time, "%H:%M:%S:%Q")
| eval Starttime = if(match(filename,"found %".filename2."%"),1,0)
| stats values(Starttime) by filename

Where is it showing the same results? Is it a scheduled saved search and you are looking at recent results? Is it a saved search you run manually or from a dashboard?  
In your rex example you said | rex field=event.url ... that is why SOURCE_KEY is event.url - as that is where the URLs are coming from, right? Your rex example indicated you are extracting the URL into a field called url_domain, which is also what is in the transforms.

If you have two events, you can't just match things between events - the text from log a does not exist when running the match statement for the log b data. Without seeing your SPL it's hard to know what you are doing - can you post the entire SPL? Please do this in a code block (</> button).

If you have two events, you need to correlate them together using stats on a common field, in this case your file name. Extract the file name from both events, define a "message type" (log a or b), and then you can do something like this logic:

| eval logtype=if(condition..., "loga", "logb")
| rex "....(?<filename>....)"
| stats count values(logtype) as logtypes min(_time) as StartTime max(_time) as EndTime by filename
| where count>1 AND logtypes="loga" AND logtypes="logb"

So with the "SOURCE_KEY = event.url" what I do is call the field where I want to get the information from?  In my case it would be the urls stored there.
You can do this in the UI - go to Settings -> Fields -> Field Transformations and add the regex and the field you want to extract from, and then in Field Extractions add a new extraction using transforms and reference the Field Transformation. This will translate to something like this in the props/transforms conf files.

In transforms.conf you will need:

[url_domain]
CLEAN_KEYS = 0
REGEX = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+)
SOURCE_KEY = event.url

In props.conf:

[sec-web]
REPORT-file_name = url_domain

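To sanity-check the regex before committing it to the conf files, the same extraction can be tried at search time first; a rough example (the index is a placeholder, and the sourcetype and field names are taken from this thread):

index=* sourcetype=sec-web
| rex field=event.url "^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+)"
| stats count by url_domain
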
Yes, log b and log a have the same index=a env=a account=.

SPL -----> rex field=_raw "The file has been found at the second destination[a-zA-Z0-9]+\/(?<filename2>[^\"]*)"

This works; I get the file names. These are exactly the logs that I am trying to match; I was using if(like(...)) at first.

log a: There is a file has been received with the name test2.txt
log b: The file has been found at the second destination C://user/test2.txt