All Posts


Have you tried my suggestion?
Yes, Microsoft generates incident IDs that are unique and collision-free for each incident. I'm going to try to disable it
Sounds like you are doing everything right. Having said that, I don't use throttling by incident ID myself, so perhaps there is an issue there. Are the incident IDs completely unique? Is there a pattern to the incidents which are getting missed?
@yuanliu wrote:

This ask could have two interpretations. The simple one is extremely simple. Let me give you the formula first.

| inputlookup pod_name_lookup where NOT [search index=abc sourcetype=kubectl | eval pod_name = mvindex(split(pod_name, "-"), 0) | stats values(pod_name) as pod_name]
| stats dc(pod_name) as count values(pod_name) as pod_name by importance

This query gets me really close. The one edge case I did not bring up is that some pods have expected names that themselves contain dashes. For example, the lookup would contain:

podd-unique-name critical

and that entry needs to match this line from the search results:

podd-unique-name-h98erg-n2439f Running critical

Yes, the "importance" values in both will match exactly, but importance only matters as a field in the lookup. The goal is to display pods that appear in the lookup but not in the search results, showing the missing pod name and its importance from the lookup.
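The prefix-matching rule described above can be sketched outside SPL. A minimal Python illustration (the pod names and importance values are invented): a pod from the search results matches a lookup entry when it equals the entry, or starts with the entry followed by a dash, which tolerates the generated -h98erg-n2439f style suffixes.

```python
# Hypothetical lookup of expected pod-name prefixes and their importance.
lookup = {
    "podd-unique-name": "critical",
    "other-pod": "low",  # invented entry with no running pod
}
# Pod names as they appear in the kubectl search results.
observed = ["podd-unique-name-h98erg-n2439f"]

def matches(lookup_name: str, pod: str) -> bool:
    # A pod matches when it is the lookup name itself, or the lookup
    # name followed by dash-separated generated suffixes.
    return pod == lookup_name or pod.startswith(lookup_name + "-")

# Keep lookup entries with no matching observed pod, importance included.
missing = {name: imp for name, imp in lookup.items()
           if not any(matches(name, pod) for pod in observed)}
print(missing)  # {'other-pod': 'low'}
```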
Something in my solution is not right. It works with either condition alone, but with both combined it produces zero events.

Events reported:

index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block
| table event_type, hostname, ip

Events reported:

index=firewall (sourcetype=collector OR sourcetype=metadata) event_type="error"
| table event_type, hostname, ip

No events reported:

index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block event_type="error"
| table event_type, hostname, ip
@sjringo - what should my trigger condition be? Also, how will your query identify the date? I don't want the alert suppressed every day from 21:00 to 4:00 am; I want it suppressed only on a specific date, which is going to be 23rd April 9 pm to 24th April 4 am.
Oh, I have provided a lot of information, like the example I gave. I have included the search query, an example of an event, the alert configuration, etc. These are events ingested via the Microsoft security API, coming from Defender, and the queries are basic: if the title of the event is X, the alert triggers. I'm getting desperate, because if you run the search manually it finds the event it should, yet the alert is not generated. The only cause I can think of is indexing delay, but since the search runs every 5 minutes and covers the entire previous hour, that should not be a problem, and yet alerts are still missed. These alerts are very important to me, and they must fire no matter what. In the example I mentioned at the beginning:

TimeIndexed = 2024-04-04 01:01:59
_time = 04/04/2024 00:56:08.600
Assuming Invetory is spelled (in)correctly, you could try this - the rex at the end is required because this date has an embedded space and it is the last field in the message:

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":\"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM\"}"
| spath
| rename message as _raw
| extract
| rex "InvetoryDate=(?<InvetoryDate>.*)"

If the fields were re-ordered, or an extra field (without an embedded space) followed in the message, then the rex would not be required:

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":\"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM, Tail=True\"}"
| spath
| rename message as _raw
| extract
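For anyone who wants to check the greedy capture outside Splunk, here is a small Python sketch of the same regex (the sample message is trimmed from the event above):

```python
import re

# Sample message in the same shape as the event; the embedded space in
# the date is what breaks simple key=value extraction of the last field.
message = ("CPWTotal=749860, SEQTotal=1026137, EASRemaining=15, "
           "VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM")

# The greedy .* captures everything after "InvetoryDate=", spaces
# included, just like the SPL rex above.
m = re.search(r"InvetoryDate=(?P<InvetoryDate>.*)", message)
print(m.group("InvetoryDate"))  # 4/16/2024 7:34:25 PM
```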
Thank you so much for the prompt reply. Below is the fixed format of the data. Please help me with this.

{"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM"}

I need to extract fields in the format below. Your help is really appreciated.

CPW Total = 844961
SEQ Total = 244881
EAS Total = 1248892
VRS Total = 238
CPW Remaining = 74572
SEQ Remaining = 22
EAS Remaining = 62751
VRS Remaining = 0
InvetoryDate = 4/15/2024 6:16:07 AM
@dc17 - I'm not sure which logs you are trying to find in the Event Viewer. Are there any known Application logs you are trying to find?
So, as I understand it,

| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name ]

is going to append the hostnames from the lookup to the results of the first query. Finally,

| stats count by customer_name hostnames

will produce a count of 1 if a value is present only in the lookup, and a count of 2 if it is present in both the first part of the search and the lookup? Is that correct?

However, in the results there are no values with a count of 2, which is unlikely, as there are a few hosts present in both the events and the lookup. Here we try to fetch the events that contain hostnames (configured to receive application logs) and then compare them with the list of servers in the lookup (count=1 should mean found in the lookup only). It seems the query still isn't performing the required comparison, since there are no values with a count of 2. I observe that a count of 1 is returned for the hosts that come from the events in:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name

There is almost no trace of the values from the lookup, so I'm not sure they are even being compared, and this is what the issue was earlier. How can these two lists be compared and listed?
Hi Hardik,

You can use this ADQL as a simple query:

SELECT `wait-state-id`, count(`wait-state-id`) FROM dbmon_wait_time

This query gathers data based on "wait-state-id", but you can change it to use "wait-state-name" instead. For example, wait state id 59 = Using CPU.

Thanks,
Cansel
Try this:

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=strftime(now(), "%H%M")
| eval is_maintenance_window=if(current_time >= "2100" OR current_time < "0400", 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1
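The midnight-spanning window is the subtle part: no clock time is both after 21:00 and before 04:00, so the two bounds must be combined with OR, not AND. A minimal Python sketch of the same check, assuming times formatted like strftime's "%H%M" output:

```python
def in_maintenance_window(hhmm: str) -> bool:
    # The window runs 21:00 -> 04:00 across midnight, so a time is
    # inside it when it is after the start OR before the end.
    return hhmm >= "2100" or hhmm < "0400"

# "%H%M" strings are fixed-width and zero-padded, so plain string
# comparison orders them correctly.
print(in_maintenance_window("2330"))  # True
print(in_maintenance_window("0130"))  # True
print(in_maintenance_window("1200"))  # False
```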
Your sample data is inconsistently formatted, e.g. sometimes there is a space before/after the "=". Please confirm the exact pattern your data will take so we don't waste effort on invalid data.
Adding to this in case anyone else is having this issue. It seems that when Python is executed, something attempts to write to /tmp, which ends up with a memory error when /tmp is mounted with noexec. Our solution was to add TMPDIR=<writable path> to splunk-launch.conf.
Since you appear to have a one-to-one relationship between label and index, just include both in the by clause:

<query>index IN ({aws_stack02_p,aws_stack01_p,aws_stack01_n})
| eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3")
| stats count by label, index</query>
In order to do calculations or meaningful comparisons with dates and times, they need to be converted (parsed) to unix-style timestamps.

| eval datetime_unix=strptime(DATETIME, "%F %T")
| eventstats max(datetime_unix) as last_datetime
| where datetime_unix == last_datetime
| stats count by market_code
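The same idea in Python, for anyone who wants to see why parsed timestamps compare correctly where raw strings may not ("%F %T" is shorthand for "%Y-%m-%d %H:%M:%S"; the sample dates are invented):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # the expanded form of "%F %T"

# Invented sample values in the same format as the DATETIME field.
a = datetime.strptime("2024-04-16 07:34:25", FMT)
b = datetime.strptime("2024-04-16 19:34:25", FMT)

# Parsed values support real comparisons and arithmetic.
print(b > a)                    # True
print((b - a).total_seconds())  # 43200.0 (12 hours)
```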
Firstly, join is not a very friendly command; it has its quirks. In this case I'd rather use either append or inputlookup append=t.

Another thing - if you do mvexpand on multiple multivalued fields you'll get a product of both sets of values. It can escalate quickly for bigger data sets. See the run-anywhere example:

| makeresults
| eval a=split("1,2,3,4,5",",")
| eval b=split("1,2,3,4,5",",")
| mvexpand a
| mvexpand b

Of course you can do some fancy set arithmetic on those multivalued fields, but it's usually easier done another way - count and filter.

This part is OK, it lists which customers use which hosts:

index=application sourcetype="application:server:log"
| stats values(host) as hostnames by customer_name

You're gonna get a list of hosts per customer. What is important here is that each host will be listed only once per customer. So we expand our list by the servers defined for each customer:

| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name ]

So now for each customer you have a single value of the hostnames field per customer per host. Nothing easier now than counting what we have:

| stats count by customer_name hostnames

So for each pair of customer_name and hostnames values you will have a count indicating whether the host was only present in the lookup (a count of 1) or in both the lookup and the indexed events (a count of 2). Now you can easily manipulate the data - filter, make a table, whatever.

All this assumes that you don't have any hosts in the event data which are not in the lookup. If you can have that situation, the search gets a bit more complex: you need to add an extra field with two different numerical values, depending on whether the row came from the events or from the lookup, and do a sum() instead of count in your final stats, so you can see where the data came from.
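The counting trick above can be sanity-checked outside Splunk. A minimal Python sketch with invented customers and hosts: each (customer, host) pair occurs at most once per source, so after concatenating the two sources a count of 1 means lookup-only and 2 means present in both:

```python
from collections import Counter

# Invented stand-ins for the indexed events and server_info.csv.
from_events = [("acme", "web01"), ("acme", "web02")]
from_lookup = [("acme", "web01"), ("acme", "web02"), ("acme", "db01")]

counts = Counter(from_events + from_lookup)

# count == 1 -> only in the lookup; count == 2 -> in both sources.
lookup_only = [pair for pair, n in counts.items() if n == 1]
print(lookup_only)  # [('acme', 'db01')]
```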
You are not giving much away! You will need to do some digging! Which events are not being picked up? When do they occur and when do they get indexed? How do these times relate to your alert searches? How important are these missed alerts? How much effort do you want to spend finding these events?
index=firewall (sourcetype=collector OR sourcetype=metadata) (enforcement_mode=block OR event_type="error")
| table event_type, hostname, ip