All Posts

OK, what date in YYYY-mm-dd format would you expect 45123 to be shown as?
Like this, @ITWhisperer: YYYY-mm-dd
MDI logs are generated on the security.microsoft.com portal and are not present locally on the servers where the Splunk forwarders and the MDI sensor are installed. There is a possibility with Sentinel [ https://learn.microsoft.com/en-us/azure/sentinel/microsoft-365-defender-sentinel-integration ], but we want to send these logs to Splunk. We might not be able to install anything on the portal. Is there any documentation available on how to send the MDI logs from the security.microsoft.com portal to Splunk?
How about YYYY/mm/dd?
So, you need to configure the inputs for the forwarders so that they know where to look for the MDI logs: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Admin/IntroGDI
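For completeness, if the MDI-related logs do end up on disk somewhere the forwarder can read, a minimal monitor stanza sketch for inputs.conf (the path, index, and sourcetype below are hypothetical placeholders, not MDI defaults):

[monitor:///var/log/mdi/*.log]
disabled = false
index = mdi
sourcetype = mdi:logs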
There are a number of ways this might be done - here is one way:
| rex mode=sed field=description "s/(^.*)(engineId.+for address \S+)(.*$)/\2/g"
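For anyone wanting to test this in isolation, a self-contained sketch using makeresults with the sample description from the question:

| makeresults
| eval description=".UnknownEngineIDException:  parsing failed. Unknown engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3 . Dropping bind from /1.5.6.3."
| rex mode=sed field=description "s/(^.*)(engineId.+for address \S+)(.*$)/\2/g"
| table description

This should leave description as: engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3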
What format of date do you want based on that number, @ITWhisperer?
"Splunk forwarders" are installed on the servers where MDI sensor is installed.  So far, no ingestion has been done.
What are MDI logs? Where are they stored? Do you have Splunk forwarders on there too? There are a lot of unanswered questions about your environment and the potential ways that data can be ingested into Splunk. Have you ingested other data sources? Can you modify these to include the MDI logs?
Hello, I am trying to display only the required strings from a description field; I would like to omit the noise and display only the meaningful content.
Actual description: .UnknownEngineIDException:  parsing failed. Unknown engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3 . Dropping bind from /1.5.6.3.
Required description: engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3
What is the issue? Stash files are used by Splunk to serialise the events so that they can be indexed. The source can be overridden in the collect command. These are from two different reports - if you are interested, you should look at the settings for those reports to see the differences in how they are sent to the summary index. As for the times: where the times are almost identical, this is likely due to a cron job, potentially from a unix-based system, whereas the sources with more varied times look like they are from a Windows-based system, which doesn't usually have cron.
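For illustration, a minimal sketch of overriding the source when writing results to a summary index (the search, index, and source names here are hypothetical):

index=_internal earliest=-1h
| stats count by sourcetype
| collect index=analyst source="weekly_report_summary"

With an explicit source like this, the summary-indexed events carry that source value instead of the auto-generated stash filename.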
I have investigated further and seen that the Info_search_time for all the stash files is the same. Please suggest whether there is any significance behind this.
Hi, I have a business use case of creating an alert which has to search and trigger if the condition is matched; this alert is cron scheduled at 1pm from Monday through Friday.

The query:
index=xyz | head 1 | eval month_year=strftime(now(),"%c") | table month_year

I work in the IST zone and the Splunk server is in the CST/CDT zone. From the alert mail, we can see that the search was executed at 1pm (13:00), but the trigger time is 1:14 am CST, and I received the alert mail at 11:44am IST. Actually, I should receive the mail at 11pm IST. Please help me out here.

Thanks
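For reference, a 1pm Monday-through-Friday schedule would normally be expressed in cron syntax as:

0 13 * * 1-5

Note that Splunk evaluates the cron schedule in a timezone (typically the timezone configured for the user who owns the search, falling back to the server's), so checking the timezone setting of the user who owns the alert is a good first step when trigger times look shifted like this.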
Can you give an example of what 45123 is supposed to be as a date? I can make a guess, but it might be wrong, which would waste everyone's time.
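If 45123 is an Excel-style serial date (an assumption: days counted from 1899-12-30), here is a minimal SPL sketch of the conversion, where 25569 is the serial value of the Unix epoch 1970-01-01:

| makeresults
| eval serial=45123
| eval date=strftime((serial-25569)*86400, "%Y-%m-%d")

Under that assumption, 45123 would render as 2023-07-16.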
Hi, we are running many applications with JAVA_AGENT and, out of those, a few applications are discovering the JDBC calls and a few are not discovering the JDBC backend. I even tried to manually add a custom discovery rule matching the URL. What other factors could be impacting this, and does anything need to be changed? I have attached the samples here.
Notes:
1. Both apps are using the same version of the MySQL connector jar: mysql-connector-java 5.1.49. (Screenshots attached: one for the application where the JDBC details and queries are captured, and one for the similar application where JDBC details are not captured.)
2. The app where details are captured is a web app where business transactions are present, but the app where this is not captured is a core Java app without transactions (more like a monitoring app). Does this have any impact?
Thanks & Regards, Akshay
Hello Splunkers!! Every week, my report runs and gathers the results under the summary index=analyst. You can see in the screenshot below that several stash files are being created for this specific report. Conversely, multiple stash files are not being created for other reports.
Report with multiple stash files.
Report with no duplicate stash files.
Please provide me assistance on this.
How can we ingest MDI logs into Splunk?
After upgrading from 9.1.0 to 9.2.1, my heavy forwarder has many lines like the following in its log:

04-01-2024 08:56:16.812 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.887 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.951 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.982 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.008 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.013 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.024 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.041 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.079 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.097 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.146 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.170 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.190 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.257 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.292 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.327 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.425 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.522 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.528 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.549 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.551 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.

How can I disable this log? And is there any error related to this INFO message?
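One way this kind of INFO noise is commonly reduced (a sketch, not guidance specific to 9.2.1 - check the docs for your version): raise the logging level for the TcpInputProc category, either temporarily via Settings > Server settings > Server logging, or persistently by adding an override line to $SPLUNK_HOME/etc/log-local.cfg (which takes precedence over log.cfg and survives upgrades, but needs a restart to take effect):

category.TcpInputProc=WARN

This only changes verbosity; it does not change the underlying behaviour being logged.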
Shouldn't we be looking for xz-utils rather than xz-libs? Like this:
source=package sourcetype=package NAME=xz-utils
Hi @PickleRick, I tried the query you suggested and it is working as expected; please find the query below. My concern is that we want to use this query as an alert with the conditions getperct > 50, putperct > 10, and deleteperct > 80, where the alert should trigger even if only one condition is met - but when I give these 3 conditions it is not working as expected.

|mstats sum(Transactions) as Transaction_count where index=metrics-logs application=login services IN(get, put, delete) span=1h by services
|timechart span=1h values(Transaction_count) by services
|autoregress get as old_get
|autoregress put as old_put
|autoregress delete as old_delete
|eval getperct=round(old_get/get*100,2)
|eval putperct=round(old_put/put*100,2)
|eval deleteperct=round(old_delete/delete*100,2)
|table getperct putperct deleteperct
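A minimal sketch of one way to make the alert fire when any single condition is met (thresholds taken from the post; this assumes the alert's trigger condition is set to "number of results > 0"): append a where clause after the final table, so only breaching rows survive:

| where getperct > 50 OR putperct > 10 OR deleteperct > 80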