All Posts


Resolved this by adding --platform=linux/amd64 when pulling the image. For instance, use FROM --platform=linux/amd64 tiangolo/uvicorn-gunicorn-fastapi:python3.10 instead of FROM tiangolo/uvicorn-gunicorn-fastapi:python3.10
Hello @anandhalagaras1 , If you're creating a custom app, you'll need to write the configuration in your default directory; otherwise, it will give you an error during validation, and the app won't pass the vetting process in Splunk Cloud.
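As a sketch, a custom app that passes Splunk Cloud vetting keeps its configuration under default/ rather than local/; the app name and file set below are illustrative, not prescribed:

```
myapp/
├── default/
│   ├── app.conf
│   └── savedsearches.conf
└── metadata/
    └── default.meta
```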
Hello! I know this is an older post, but, I just tried the latest version of getwatchlist in Splunk Cloud, and your query works as expected now. Thanks!
Hi, can anyone please suggest where I can submit a bug report for dashboard visualisations? Thanks
Thanks for the response. I'm not getting any matches, though. Everything is coming back as count=0 even though there are entries in the lookup that should match.
Have you tried my suggestion?
Yes, Microsoft generates incident IDs that are unique and collision-free for each incident. I'm going to try to disable it
Sounds like you are doing everything right. That said, I don't use throttling by incident ID, so perhaps there is an issue there? Are the incident IDs completely unique? Is there a pattern to the incidents which are getting missed?
@yuanliu wrote: This ask could have two interpretations. The simple one is extremely simple. Let me give you the formula first.

| inputlookup pod_name_lookup where NOT [search index=abc sourcetype=kubectl | eval pod_name = mvindex(split(pod_name, "-"), 0) | stats values(pod_name) as pod_name] | stats dc(pod_name) as count values(pod_name) as pod_name by importance

This query gets me really close. The one edge case I did not bring up is that some pods have multiple parts of the expected name that are also split by dashes. For example, I would have this in the lookup:

podd-unique-name critical

and need to match

podd-unique-name-h98erg-n2439f Running critical

from the results. Yes, the "importance" in both will match exactly, but it only matters as a field in the lookup. The goal is to display pods that are in the inputlookup but not found in the search results, and to show the missing pod name together with the "importance" field from the lookup.
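If every running pod name ends with exactly two generated dash-separated segments (the ReplicaSet hash and the pod hash), one hedged variant is to strip those two trailing segments with replace() instead of keeping only the first segment; adjust the `{2}` if the number of generated segments varies:

```
| inputlookup pod_name_lookup where NOT [
    search index=abc sourcetype=kubectl
    | eval pod_name=replace(pod_name, "(-[a-z0-9]+){2}$", "")
    | stats values(pod_name) as pod_name ]
| stats dc(pod_name) as count values(pod_name) as pod_name by importance
```

This way "podd-unique-name-h98erg-n2439f" reduces to "podd-unique-name" and matches the lookup entry regardless of how many dashes the expected name itself contains.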
Something in my solution is not right. Each condition works on its own, but combined they produce zero events.

Events reported:

index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block | table event_type, hostname, ip

Events reported:

index=firewall (sourcetype=collector OR sourcetype=metadata) event_type="error" | table event_type, hostname, ip

No events reported:

index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block event_type="error" | table event_type, hostname, ip
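One possible explanation is that enforcement_mode and event_type are extracted from different sourcetypes, so no single event carries both fields, and ANDing them matches nothing. A quick sketch to check which field combinations actually co-occur:

```
index=firewall (sourcetype=collector OR sourcetype=metadata)
| stats count by sourcetype enforcement_mode event_type
```

If no row shows both enforcement_mode=block and event_type="error" together, the two conditions would need to be correlated across events (e.g. by hostname) rather than applied to a single event.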
@sjringo - What should my trigger condition be? Also, how will your query identify which date? I don't want the alert suppressed every day from 21:00 to 4:00 am; I want it suppressed only on a specific date, from April 23rd at 9 pm to April 24th at 4 am.
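One way to suppress only that specific window (assuming the year is 2024) is to compare now() against hard-coded epoch boundaries instead of a daily time-of-day check; a sketch that could be appended to the alert search:

```
| eval win_start=strptime("2024-04-23 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval win_end=strptime("2024-04-24 04:00:00", "%Y-%m-%d %H:%M:%S")
| where NOT (now() >= win_start AND now() < win_end)
```

With this filter in place, the trigger condition can stay a simple "number of results > 0", since results are dropped entirely during the window.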
Oh, I have put a lot of information about it, like the example I gave: the search query, an example of an event, the alert configuration, etc. The events are ingested via the Microsoft security API, coming from Defender, and the queries are basic: if the title of the event is x, the alert triggers. I'm getting desperate, because if you run the search manually it finds the event it should, yet the alert is not generated. The only cause I can think of is indexing delay, but I understand that if the search runs every 5 minutes and covers the entire previous hour, there should be no problem, and yet there is. These alerts are very important to me, and they must fire no matter what. In the example I mentioned at the beginning: TimeIndexed = 2024-04-04 01:01:59, _time = 04/04/2024 00:56:08.600
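If indexing lag is the suspect, one hedged option is to scope the scheduled search by index time rather than event time, so events that arrive late are still picked up by the next run; the index and sourcetype below are placeholders for your actual Defender data:

```
index=security sourcetype=ms:defender:incident _index_earliest=-10m@m _index_latest=now
```

With index-time bounds, an event indexed at 01:01:59 is caught by the run covering that index-time window, regardless of its earlier _time.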
Assuming Invetory is spelled (in)correctly, you could try this - the rex at the end is required because this date has an embedded space and it is the last field in the message:

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":\"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM\"}"
| spath
| rename message as _raw
| extract
| rex "InvetoryDate=(?<InvetoryDate>.*)"

If the fields were re-ordered or an extra field was in the message (without an embedded space), then the rex would not be required:

| makeresults
| eval _raw="{\"id\":\"0\",\"severity\":\"Information\",\"message\":\"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM, Tail=True\"}"
| spath
| rename message as _raw
| extract
Thank you so much for prompt reply. Below is the fixed format of the data. Please help me on this.  {"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804,... See more...
Thank you so much for the prompt reply. Below is the fixed format of the data. Please help me with this.

{"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM"}

I need to extract the fields in the format below. Your help is really appreciated.

| CPW Total | SEQ Total | EAS Total | VRS Total | CPW Remaining | SEQ Remaining | EAS Remaining | VRS Remaining | InvetoryDate |
| 844961 | 244881 | 1248892 | 238 | 74572 | 22 | 62751 | 0 | 4/15/2024 6:16:07 AM |
@dc17 - I'm not sure what logs you are trying to find in the Event Viewer. Are there any known Application logs you are trying to find?
So, as I understand it,

| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames by customer_name ]

is going to append the hostnames from the lookup to the results received from the first query. Finally,

| stats count by customer_name hostnames

is going to give a count of 1 if a value is present only in the lookup, and a count of 2 if it is present in both the first part of the search and the lookup? Is that correct?

However, in the result there are no values with a count of 2, which is unlikely, as there are a few hosts present in both the events and the lookup. Here, we try to fetch the events that contain hostnames (configured to receive application logs) and then compare them with the list of servers present in the lookup (count=1 meaning found in the lookup only). It seems the query still isn't performing the required comparison: I observe count returned as 1 only for the hosts received from the events below.

index=application sourcetype="application:server:log" | stats values(host) as hostnames by customer_name

There is almost no trace of the lookup values, so I'm not sure they are even being compared, and this is what the issue was earlier. How can these two lists be compared and listed?
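A hedged alternative that makes the overlap explicit is to tag each side with a marker field, normalize the host names (casing or whitespace mismatches often explain "no trace of lookup values"), and combine per host with stats; the customer and host field names in server_info.csv are assumptions:

```
index=application sourcetype="application:server:log"
| stats count by customer_name host
| eval host=lower(trim(host)), in_events=1
| append [
    | inputlookup server_info.csv
    | rename customer as customer_name
    | eval host=lower(trim(host)), in_lookup=1
    | table customer_name host in_lookup ]
| stats max(in_events) as in_events max(in_lookup) as in_lookup by customer_name host
| eval status=case(in_events=1 AND in_lookup=1, "both", in_lookup=1, "lookup only", true(), "events only")
```

Grouping by individual host (rather than a multivalue hostnames field) is what lets the two lists line up row by row; the earlier approach compared whole multivalue sets, which rarely match exactly.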
Hi Hardik, You can use this ADQL as a simple SELECT `wait-state-id`, (`wait-state-id`) FROM dbmon_wait_time This query gathers data based on "wait state id" but you can change this query based on ... See more...
Hi Hardik, You can use this ADQL as a simple example:

SELECT `wait-state-id`, (`wait-state-id`) FROM dbmon_wait_time

This query gathers data based on "wait state id", but you can change it to use "wait state name" instead. For example, wait state id 59 = Using CPU. Thanks, Cansel
Try this: | tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host | eval current_time=strftime(now(), "%H%M") | eval is_maintenance_window=if... See more...
Try this:

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=tonumber(strftime(now(), "%H%M"))
| eval is_maintenance_window=if(current_time >= 2100 OR current_time < 400, 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window = 0 AND is_server_down = 1

Note that because the window spans midnight, the two time comparisons need OR rather than AND (21:00-04:00 can never satisfy both at once), and strftime output should be converted with tonumber() before numeric comparison.
Your sample data is inconsistently formatted, e.g. sometimes there is a space before/after the =. Please confirm the exact pattern your data will take so we don't waste effort on invalid data.
Adding to this in case anyone else is having this issue. It seems that when Python is executed, something attempts to write to /tmp, which ends up with a memory error when /tmp is mounted with noexec. Our solution was to add TMPDIR=<writable path> to splunk-launch.conf.
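For reference, a minimal sketch of the setting in $SPLUNK_HOME/etc/splunk-launch.conf; the path below is illustrative and just needs to be writable by the splunk user and not mounted noexec:

```
# splunk-launch.conf
TMPDIR=/opt/splunk/var/tmp
```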