OK. So if your June had 23 working days, you want only the sum of the license usage during those 23 days divided by 23, right? You simply count as if the week were 5 days long and completely ignore the existence of Saturdays and Sundays? When you open the licensing report in search, you see something like this:

index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary"

I suppose if you have a distributed environment you might not have the localhost part but some other way of selecting indexers. Anyway, since it's run right after midnight to calculate the summarized amount of license used per day, the search behind the report subtracts half a day (43200 seconds) from the _time field and then bins over _time. And that's pretty much it - the b field contains the sum of bytes indexed. Now you only have to filter out the Saturdays/Sundays (possibly with strftime) and do a stats avg, and Robert is your father's brother.
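Putting those steps together, a sketch of such a search might look like this (untested; it assumes the `set_local_host` macro and the b field from the built-in report, and an example time range of last month):

```
index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-1mon@mon latest=@mon
| eval _time=_time - 43200
| bin _time span=1d
| stats sum(b) as bytes by _time
| eval weekday=strftime(_time, "%a")
| where NOT weekday IN ("Sat","Sun")
| stats avg(bytes) as avg_daily_bytes
```

The where clause drops the weekend days before averaging, so the divisor is the number of weekdays actually present in the range.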
I have created the graph on an hourly basis, so it displays counts on the bars by hour. Now my requirement is: I want to display one message on the graph, Totalcount=X.
I'm looking for the daily average for each month, excluding weekends altogether. So for example, for September, what was the average daily ingest over all days Monday through Friday?
Hi, I am trying to determine how to see which alerts are using specific indexes in Splunk. Is there a way to search for that? For example, if I wanted to see all alerts that are using index=firewall, how would I get that?
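One common approach is to query the saved-search configuration over REST and filter on the search string. A sketch (it requires permission to hit the REST endpoint, lists all saved searches rather than only alerts, and simple string matching will miss searches that reference the index through a macro):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*index=firewall*"
| table title eai:acl.app search
```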
Please define what you mean by "average daily ingest excluding weekends". Do you mean to sum only values from Monday to Friday and divide by 5 days weekly, or do you want to sum values from the whole 7 days and divide by 5, or maybe sum values from 5 days and divide by 7? (Of course extrapolated to your whole search timerange, but how do you want to treat those weekends?)
I think I just answered that. Splunk is only part of the answer. After all, Splunk can't show you what isn't indexed. Therefore, if Splunk doesn't find a blacklisted event, the blacklist is probably working. Confirm that by looking at Windows Event Viewer to see whether a blacklisted event was generated.
values(*) as * means take the values of all other fields and put them into fields by the same name. So each field that existed before stats will exist after it, but possibly with more than one value in each.
As soon as you can get the data about passwords into Splunk, we can help you search it. But you need to have that data. Splunk as such is "just" a data processing tool. EDIT: Typically, querying for default credentials is part of what vulnerability scanners do.
That's the funny part - I don't even have the TA. But I admit I haven't really gotten to the "let's use that data in any way" part, which means I didn't care about extractions or CIM compliance. I wasn't even aware that there is a TA for Suricata. I just added an input to pull the events into Splunk and that's it.
It's a way of telling Splunk to rename the fields. Normally, if you just do

| stats values(*)

it will name the resulting fields values(fielda), values(fieldb), values(fieldc) and so on. If you just want to see what those values are, that's no problem, but it's not very convenient to work with such fields later. So if you do

| stats values(*) as *

the resulting multivalued fields will be named the same as the original fields you are summarizing, so instead of values(fielda) you'll still have fielda.
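A self-contained toy example you can paste into the search bar to see the effect (makeresults and streamstats just fabricate a few sample events):

```
| makeresults count=4
| streamstats count as n
| eval user=if(n<=2,"alice","bob"),
       action=case(n=1,"login", n=2,"logout", n=3,"login", n=4,"login")
| stats values(*) as * by user
```

After the stats, alice keeps a multivalued action field (login and logout) under the original name action, instead of values(action).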
That's the input file on the Suricata server? Do you have the Suricata-TA installed on the forwarder, the server, both, or are you even using the Suricata-TA?
More words please. A subsearch is executed (and its results are substituted) where it's placed. So if you do

collect [...] sourcetype=[whatever subsearch you come up with]

it will work. But that will give you one static value for the whole collect command. If you want to dynamically assign the "destination" sourcetype per event, you must use the HEC format.
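To illustrate the difference, two sketches (untested; the second assumes a Splunk version whose collect command supports output_format=hec, and the sourcetype names are made up). One static sourcetype for everything, picked by a subsearch:

```
index=main
| collect index=summary sourcetype=[ search index=_internal | head 1 | return $sourcetype ]
```

versus a per-event sourcetype carried in the events themselves:

```
index=main
| eval sourcetype=if(searchmatch("error"), "st:error", "st:ok")
| collect index=summary output_format=hec
```

In the first case the subsearch result is pasted in once, before collect runs; in the second, collect reads the sourcetype field from each event.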
That's interesting though, because my whole config for ingesting Suricata's eve.json boils down to this:

[monitor:///var/log/suricata/eve.json]
disabled = false
host = backup
index = net
sourcetype = suricata

I don't even have anything configured for the suricata sourcetype - it just automatically gets parsed as JSON. I should get it configured more reasonably, but it's my home lab server so I don't mind.