This is similar to an example in the Exploring Splunk book: https://www.splunk.com/pdfs/exploring-splunk.pdf
sourcetype=access_combined earliest=-2d@d latest=@d
| eval marker = if(_time < relative_time(now(), "-1d@d"), "last week", "this week")
| eval _time = if(marker=="last week", _time + 24*60*60, _time)
| timechart avg(bytes) by marker
Modify it to fit your time frames and your base search (as written, the example compares yesterday with the day before, overlaying them by shifting the older day forward 24 hours), but the concept should help you.
You are using the from field as the key for your transaction, but then you are specifying startswith and endswith values that also reference the from field. You need to use a field that uniquely identifies each transaction, and startswith and endswith should identify the beginning and ending events of that transaction.
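As a minimal sketch (the session_id field and the "login"/"logout" strings are just assumptions about your data, not your actual field names):
[YOUR BASE SEARCH HERE]
| transaction session_id startswith="login" endswith="logout"
Here session_id ties the events of one transaction together, while startswith and endswith match text in the first and last events of that transaction.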
Add the eval statement that I added, just above the last line of your search:
sourcetype="fire-ext_prd_app" NOT cv
| eval Mac = proctorCacheOS2
| eval Windows = proctorCacheOS
| spath output=proctorCacheOS path="msg0.OS"
| spath output=proctorCacheOS2 path="msg0.OS"
| search proctorCacheOS=Windows* OR proctorCacheOS2=Mac*
| eval winmac=case(proctorCacheOS like "Windows%","Windows",proctorCacheOS like "Mac%","Mac")
| top limit=50 proctorCacheOS
The benefit of filtering at the UF is the ability to easily change the blacklisting. If you are filtering things out at the WEC, then the data was never collected at all. So, in short, if you think you might need it at some point, I would say collect it at the WEC and blacklist it on the UF.
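For example, a hedged inputs.conf sketch for the UF (the channel and EventCode here are only illustrations; adjust for your environment):
[WinEventLog://ForwardedEvents]
disabled = 0
blacklist1 = EventCode="4662"
Changing or removing that blacklist later is just a config change on the forwarder, whereas a WEC subscription filter means the events were never collected in the first place.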
You could check out this NTPDrifter app to help identify the problem servers.
https://splunkbase.splunk.com/app/1292/
As for fixing the issue, I believe the best way is to correct the NTP problem at the source. Trying to manipulate your searches to account for it would really just be a band-aid, and I'm not sure there would even be a good way to do it.
I'm not sure if it meets your requirements as a "Health Monitoring" tool, but check out the Cisco Security Suite app on Splunkbase.
https://splunkbase.splunk.com/app/525/#/overview
Is this what you are looking for?
sourcetype="WinEventLog:Security" EventCode=4625 earliest=-15m@m
| eval userfield=mvindex(Account_Name,1)
| stats count as failedlogins by userfield
| where failedlogins > 4
I did the userfield extraction because Account_Name is usually a multivalued field; in my demo data the first value is just a "-", so mvindex(Account_Name,1) picks the second value, which is the actual account. To change the time window, modify the earliest=-15m@m in the first line. To change the threshold, modify the where clause.
You should be able to upload your app through the GUI, I believe under Manage Apps. It will automatically try to vet the app for you and indicate anything that needs to be fixed. Depending on the app, it may still require Splunk Cloud Ops to do some vetting, which you can request by opening a support ticket.
Is this what you were looking for?
index="yourindex" sourcetype="yoursourcetype"
| eventstats min(c3) as min max(c3) as max by c
| eval c4=max-min
| table c c2 c3 c4
| sort c c2
I would extract the table name as a field if you haven't already, then try something like the following:
(tablename=TBL1 OR tablename=TBL2 OR tablename=TBL3)
| eventstats count as totalevents
| eventstats count as tablecount by tablename
| eval percentage=(tablecount/totalevents)*100
| stats values(tablecount) as table_count values(percentage) as table_perc by tablename
I only covered 3 of the tablenames, but you get the idea.
I would concatenate the date_hour and date_wday fields to use as a key for your lookup.
[YOUR BASE SEARCH HERE]
| eval day_hour=date_hour . "_" . date_wday
| table _time day_hour
This would create a field called day_hour that looks like the following: 9_monday.
Your CSV would then look like the following:
day_hour,threshold
9_monday,50000
10_monday,60000
11_monday,50000
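To tie it together, a minimal sketch of using that CSV (assuming it is uploaded as a lookup file named thresholds.csv and your base search is aggregated into a count field):
[YOUR BASE SEARCH HERE]
| eval day_hour=date_hour . "_" . date_wday
| stats count by day_hour
| lookup thresholds.csv day_hour OUTPUT threshold
| where count > tonumber(threshold)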
Try using the split() eval function in your searches. Something like the following:
| makeresults
| eval commadelim="value1, value2, value3"
| eval newmvfield=split(commadelim,",")
I just realized my search was a bit off based on your request. You wanted to know the number of starts and stops per session. That would look more like the following:
[YOUR BASE SEARCH HERE]
| stats count(eval(action="start")) as starts count(eval(action="stop")) as stops values(user) as users values(computer) as computers by session
The users and computers fields would contain a multivalued list of all distinct values for the user and computer fields. Not sure if that is what you wanted, but it is probably a good idea, since there would be multiple values based on your data sample.
Try something like this:
[YOUR BASE SEARCH HERE]
| stats count(eval(action="start")) as starts count(eval(action="stop")) as stops by session user computer
I created a field called action for the start and stop values, as well as giving the other fields logical names: session, user, and computer.
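For reference, a rough sketch of how an action field like that might be derived (the searchmatch() strings are assumptions about your raw events, not your actual log text):
[YOUR BASE SEARCH HERE]
| eval action=case(searchmatch("session started"), "start", searchmatch("session stopped"), "stop")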
Reach out to your Splunk sales team. They should be able to generate a reset key for you. I'm not positive, but it should work the same way for the personalized dev/test licenses as it does for a regular license.
... View more
Splunk supports ingesting ASCII / human-readable data. You could technically ingest the files, but it really wouldn't be anything understandable. If there is a way to convert the binary to ASCII, then you could use a scripted input to run a script against the binaries. The script could dump the ASCII output to a directory which you could then monitor.
As for monitoring a whole directory, take a look at the docs here: https://docs.splunk.com/Documentation/Splunk/7.2.6/Data/Monitorfilesanddirectorieswithinputs.conf
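For the directory-monitoring part, a minimal inputs.conf sketch (the path and sourcetype are hypothetical), watching a directory of already-converted ASCII output:
[monitor:///opt/converted_output]
sourcetype = converted_binary
disabled = 0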
There is a new mobile app for Splunk: https://docs.splunk.com/Documentation/Alerts/1.4.0/Alerts/Installation
It requires the Splunk Cloud Gateway (https://splunkbase.splunk.com/app/4250/#/overview), which is only supported on Splunk 7.2 and up.
This can be hard to quantify, because it will depend on a lot of factors. Here is the documentation on monitoring files and directories:
https://docs.splunk.com/Documentation/Splunk/7.2.6/Data/Monitorfilesanddirectorieswithinputs.conf
A few things I have run into that can help keep CPU usage lower:
Be as specific as possible with your inputs. Wildcards can sometimes be too greedy, and you may end up monitoring files that aren't necessary.
Be careful with recursive=true for similar reasons; unless you know you need every subdirectory under a directory, use it sparingly.
ignoreOlderThan can be helpful when first adding an input; reading in a large backlog of existing files can cause high CPU usage until the monitor catches up.
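For example, a minimal monitor stanza pulling these settings together (the path, whitelist, and values are only illustrative):
[monitor:///var/log/myapp]
whitelist = \.log$
recursive = false
ignoreOlderThan = 7d
sourcetype = myapp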
Hope this helps somewhat.
You would probably have to settle for close to the same time, but you could do something with the transaction command:
sourcetype=sourcetype1 OR sourcetype=sourcetype2
| transaction username maxspan=10s
| eval st_count=mvcount(sourcetype)
| where eventcount > 1 AND st_count > 1
| table username eventcount st_count
Take note of the maxspan=10s bit; this is your tolerance for how far apart in time the events can be. The next line, where we do the mvcount, is so we can make sure we are getting events from both of your sourcetypes: it counts the number of values in the multivalued sourcetype field that the transaction command generates. The eventcount field is automatically generated when you use transaction. The transaction command will group all events within the maxspan where username is the same.
Hopefully this will work for your use case.