Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

It is not clear whether there is an issue - to me it looks like the reports that were run on Feb 29th were done manually / ad hoc to back-fill the summary index for the earlier weeks before the schedule was set up and running correctly.
@ITWhisperer Both the saved searches are running at the same time. In your view, is this causing the issue?
| stats count by index | stats count
ep_winevt_ms* - this index pattern is mapped in the Data Model macros. I want to exclude all other indexes and, for those matching ep_winevt_ms*, take the count as 1 to know the unique indexes. @ITWhisperer
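As a sketch of what is being asked for here (the eval is an assumption about the desired grouping, not something confirmed in the thread):

index=ep_winevt_ms* | eval index_group="ep_winevt_ms" | stats dc(index_group) AS count

The eval collapses every ep_winevt_ms* index into one group, so count is 1 regardless of how many indexes the wildcard matches; by contrast, | stats dc(index) would report how many distinct indexes matched.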
If I give you a conversion, how will you know whether it is correct or not?
I’m not sure. Could you give me an example so that I can try it? @ITWhisperer
| stats count will give you the count of events returned by the search of all the ep_winevt_ms* indexes. Why do you think this is not the case?
I have 10 indexes starting with "ep_winevt_ms", so I am using a wildcard here: index=ep_winevt_ms*. But when taking | stats count, I want only 1 count for the entire "ep_winevt_ms*" group. I don't want 10 counts for "ep_winevt_ms*". Please help
OK what date in YYYY-mm-dd format would you expect 45123 to be shown as?
Like this @ITWhisperer  YYYY-mm-dd
MDI logs are generated on the security.microsoft.com portal and are not present locally on the servers where the Splunk forwarders and the MDI sensor are installed. There is a possibility with Sentinel [ https://learn.microsoft.com/en-us/azure/sentinel/microsoft-365-defender-sentinel-integration ] but we want to do this with Splunk. We might not be able to install anything on the portal. Is there any documentation available on how to send the MDI logs from the security.microsoft.com portal to Splunk?
How about YYYY/mm/dd?
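For reference, if 45123 is an Excel-style serial date (days since 1899-12-30; this is an assumption, since the thread never says where the number comes from), a conversion in SPL could look like:

| eval date=strftime((45123 - 25569) * 86400, "%Y-%m-%d")

Here 25569 is the Excel serial number for 1970-01-01, so the eval rebases the serial date onto the Unix epoch before formatting it; swap "%Y-%m-%d" for "%Y/%m/%d" to get the slash-separated form.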
So, you need to configure the inputs for the forwarders so that they know where to look for the MDI logs: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Admin/IntroGDI
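As a sketch only (the path, sourcetype, and index below are placeholders, since the thread never establishes where, or whether, the MDI logs land on disk), a file monitor input on a forwarder would look something like this in inputs.conf:

[monitor:///var/log/mdi/]
sourcetype = mdi:logs
index = security

See the Getting Data In documentation linked above for the other supported input types.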
There are a number of ways this might be done - here is one way:
| rex mode=sed field=description "s/(^.*)(engineId.+for address \S+)(.*$)/\2/g"
Format of the date you want based on that number? @ITWhisperer 
"Splunk forwarders" are installed on the servers where MDI sensor is installed.  So far, no ingestion has been done.
What are MDI logs? Where are they stored? Do you have Splunk forwarders on there too? There are a lot of unanswered questions about your environment and the potential ways that data can be ingested into Splunk. Have you ingested other data sources? Can you modify these to include the MDI logs?
Hello, I am trying to display only the required strings. This is a description field; I would like to omit the noise and display only the meaningful content.

Actual description: .UnknownEngineIDException: parsing failed. Unknown engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3 . Dropping bind from /1.5.6.3.

Required description: engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3
What is the issue? Stash files are used by Splunk to serialise the events so that they can be indexed, and the source can be overridden in the collect command.

These are from two different reports - if you are interested, you should look at the settings for those reports to see the differences in how they are sent to the summary index.

As for the times: where the times are almost identical, this is likely to be due to a cron job, which is potentially from a Unix-based system, whereas the sources with more varied times look like they are from a Windows-based system, which doesn't usually have cron.
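For example, the source override mentioned here is just a parameter on collect (the index and source names below are placeholders):

| collect index=my_summary source="weekly_report_backfill"

Events written this way land in the summary index with source=weekly_report_backfill instead of the default stash file path as the source.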
I have investigated further and seen that the info_search_time for all the stash files is the same. Is there any significance behind this?