All Topics

Hi team,

I have a standalone production instance on which Splunk ITSI is running. Daily ingestion: 50 GB. ITSI license: 20 GB. The ITSI license expired, so my team purchased a new one, but when I push the new license, Splunk deletes the Enterprise license and throws a license-exceeded warning. My questions are:
1) How is ITSI license usage calculated?
2) What are the steps to create a license stack in Splunk?
3) Can I create a stack containing both an Enterprise and an ITSI license?
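Licenses are normally installed from the Licensing page or the CLI; a minimal CLI sketch (the license file path is an example) would be:

```
# Install the new ITSI license alongside the existing ones
$SPLUNK_HOME/bin/splunk add licenses /tmp/new_itsi.lic

# List installed licenses and the stacks they belong to
$SPLUNK_HOME/bin/splunk list licenses
```

Enterprise and ITSI licenses are normally tracked in separate stacks, so installing one should not remove the other; if it does, that is worth raising with Splunk support.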
Hi,

The very important services_kpi_lookup KV store collection got overwritten by mistake when an operator wrote "|outputlookup services_kpi_lookup" instead of "|inputlookup services_kpi_lookup". This has had severe consequences: the ITSI environment does not work right now, and it looks like we have no services, service templates, base searches, etc. Luckily, Splunk ITSI keeps backups going back a week by default. However, when we tried to restore from one, the restore failed. This is our only chance to salvage our environment. It seems to fail because our ITSI environment is so big: we had over 1000 services and over 8000 KPIs. When we read the logs, we see the following:

This last error is the one we are stuck on right now. The restore-from-backup functionality does not work in our case and we do not know why. Any help would be appreciated.

Kind regards,
A Very Concerned Person
I am trying to count the employees per location during a particular shift and date. I'm pretty new to Splunk and I am approaching the searches like a SQL query. I can display them, as long as I am not counting the employees, using this:

source=something1 OR source=something2 host=somethinghost index=somethingindex sourcetype=csv
| stats values(Location_id) as LocationID, values(Employee_ID) as EmpID, values(Shift_id) as Shift by Employee_ID, Date
| table Date, LocationID, EmpID, Shift

I tried replacing values(Employee_ID) with count and playing around with the "by" clause, but the count ends up as 2 for every row instead of 1.

Shifts
Shift_id  Employee_ID  Date
1         994163       8/15/2020
2         123456       8/15/2020
3         654321       8/15/2020
1         994163       8/16/2020
2         123456       8/16/2020
3         654321       8/16/2020
1         991234       8/16/2020

Locations
Location_id  Employee_ID  Date
L01          994163       8/15/2020
L02          123456       8/15/2020
L03          654321       8/15/2020
L01          994163       8/16/2020
L02          123456       8/16/2020
L03          654321       8/16/2020
L07          991234       8/16/2020

Desired output:
Date       LocationID  count  Shift
8/15/2020  L02         1      2
8/15/2020  L03         1      3
8/15/2020  L01         1      1
8/16/2020  L02         1      2
8/16/2020  L03         1      3
8/16/2020  L07         1      1
8/16/2020  L01         1      1

What I am getting:
Date       LocationID  count  Shift
8/15/2020  L02         2      2
8/15/2020  L03         2      3
8/15/2020  L01         2      1
8/16/2020  L02         2      2
8/16/2020  L03         2      3
8/16/2020  L07         2      1
8/16/2020  L01         2      1
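Since each employee/date pair has one event in the Shifts source and one in the Locations source, a plain count by Employee_ID and Date sees two events. One sketch (field names taken from the post) is to collapse the two events per employee/date first, then count employees:

```
source=something1 OR source=something2 host=somethinghost index=somethingindex sourcetype=csv
| stats values(Location_id) as LocationID, values(Shift_id) as Shift by Employee_ID, Date
| stats count by Date, LocationID, Shift
```

The first stats merges the Shifts and Locations events into one row per employee and date; the second counts distinct employees per location, shift, and date.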
We have the following SPL query which generates statuses (i.e. "Success", "Failure", "Warn") for various "services" (these are basically files being transferred from a source location to a target location). The requirement is to show the statuses for these services over a 7-day period, since this SPL query will be used in a dashboard to monitor file transfer statuses daily. The issue we are running into is our inability to generate a value of "Not Run" for date columns where no transactions occurred: there is no timestamp for the cron fields nor for the epoch_file_info_created_at/modified_at fields. Currently, we are trying to see how we can populate the blank/non-existent values with an output of "Not Run". Is there a way for us to accomplish this?

Here is a sample output which includes a column with blank/non-existent values:

Service ID | Service Name | Priority | Service Area | Source  | Target  | 2020-08-15 | 2020-08-14 | 2020-08-13
100        | File1        | 2        | SA1          | Source1 | Target2 | Success    | (blank)    | Not Run
105        | File2        | 1        | SA2          | Source2 | Target1 | Warn       | (blank)    | Success

Here are some raw logs with dummy data as well:

2020-08-15T09:42:55.746+0000 2020-08-15 13:42:55.746, md5hash="hash1", file_info_rule_id="200", file_id="25", file_name="File1", location="/file/location", source_sys_name="Source1", target_sys_name="Target1", validation_status="VALID", transfer_status="Successful", verification_status="Verified", file_info_created_at="2020-08-15 13:42:55.746+00", file_info_modified_at="2020-08-15 13:42:58.377+00", filesize="12 MB", id="8", interface_id="Int1", integration_name="FileName1", source_sys_id="6", target_sys_id="10", early_file_delivery_time="0 9  * * *", late_file_delivery_time="0 16  * * *", recurrence_pattern="0 15  * * *", processing_duration="0.25", short_processing_time="0.05", expected_file_size_threshold="10-15", validation_type="5", file_type=".csv", expected_rows_counts="90000", priority="Medium", sonc_metadata_created_at="2020-07-13 15:47:30.83+00", sonc_metadata_modified_at="2020-07-22 13:07:04.732+00", sonc_metadata_rule_id="200", s3_folder_path="file/location2", long_processing_time="0.5", service_area="SA1"

2020-08-15T09:41:30.663+0000 2020-08-15 13:41:30.663, md5hash="hash2", file_info_rule_id="225", file_id="24", file_name="filename1", location="file/location/3", source_sys_name="Source1", target_sys_name="Target1", validation_status="VALID", transfer_status="Successful", verification_status="Verified", file_info_created_at="2020-08-15 13:41:30.663+00", file_info_modified_at="2020-08-15 13:41:33.373+00", filesize="12 MB", id="14", interface_id="INT3", integration_name="FileType4", source_sys_id="6", target_sys_id="10", early_file_delivery_time="0 8  * * *", late_file_delivery_time="0 16  * * *", recurrence_pattern="0 15  * * *", processing_duration="0.25", short_processing_time="0.05", expected_file_size_threshold="10-15", validation_type="7", file_type=".x12", priority="High", sonc_metadata_created_at="2020-07-20 15:12:45.625+00", sonc_metadata_modified_at="2020-07-22 13:16:29.969+00", sonc_metadata_rule_id="225", s3_folder_path="file/location/new", long_processing_time="0.5", service_area="SA3"

Here is the SPL:

index=hcnc_rds_db sourcetype=rds_test
| eval epoch_file_info_created_at=strptime(file_info_created_at, "%Y-%m-%d %H:%M:%S.%3Q")-14400, epoch_file_info_modified_at=strptime(file_info_modified_at, "%Y-%m-%d %H:%M:%S.%3Q")-14400
| streamstats latest(*) as * by interface_id, epoch_file_info_created_at
| eval early_start_epoch=relative_time(epoch_file_info_created_at,"@d"), late_start_epoch=relative_time(epoch_file_info_created_at,"@d"), recurrence_pattern_epoch=relative_time(epoch_file_info_created_at,"@d")
| croniter iterations=1 input=early_file_delivery_time start_epoch=early_start_epoch
| rename croniter_return as epoch_early_file
| croniter iterations=1 input=late_file_delivery_time start_epoch=late_start_epoch
| rename croniter_return as epoch_late_file
| croniter iterations=1 input=recurrence_pattern start_epoch=recurrence_pattern_epoch
| rename croniter_return as epoch_recurrence
| fields - early_file_delivery_time
| eval epoch_early_file=epoch_early_file+14400, epoch_late_file=epoch_late_file+14400, epoch_recurrence=epoch_recurrence+14400
| fieldformat epoch_early_file=strftime(epoch_early_file, "%Y-%m-%d %H:%M:%S.%3Q")
| fieldformat epoch_late_file=strftime(epoch_late_file, "%Y-%m-%d %H:%M:%S.%3Q")
| fieldformat epoch_recurrence=strftime(epoch_recurrence, "%Y-%m-%d %H:%M:%S.%3Q")
| eval date_reference=strftime(epoch_file_info_created_at, "%Y-%m-%d"), process_time=epoch_file_info_modified_at-epoch_file_info_created_at
| eval results=case(validation_error="FILE_TYPE_MISMATCH", "Failure", epoch_file_info_created_at>epoch_early_file AND epoch_file_info_created_at<epoch_late_file, "Success", epoch_file_info_created_at<epoch_early_file, "Warn", process_time>300 AND process_time<1800, "Success", 1=1, "Not Run")
| eval combined=interface_id."@".integration_name."@".priority."@".service_area."@".source_sys_name."@".target_sys_name."@"
| xyseries combined date_reference results
| rex field=combined "^(?<interface_id>[^\@]+)\@(?<integration_name>[^\@]+)\@(?<priority>[^\@]+)\@(?<service_area>[^\@]+)\@(?<source_sys_name>[^\@]+)\@(?<target_sys_name>[^\@]+)\@$"
| fillnull value="Not Run"
| eval alert_sent=if(isnull(alert_sent), "No", alert_sent)
| eval status=if(isnull(status), "Not Run", status)
| eval ticket_id=if(isnull(ticket_id), "No", ticket_id)
| table interface_id, integration_name, priority, service_area, source_sys_name, target_sys_name [ makeresults | addinfo | eval time = mvappend(relative_time(info_min_time,"@d"),relative_time(info_max_time,"@d")) | fields time | mvexpand time | makecontinuous time span=1d | eval time=strftime(time,"%F") | reverse | stats list(time) as time | return $time ]
| rename interface_id as "Service ID", integration_name as "Service Name", priority as "Priority", service_area as "Service Area", source_sys_name as "Source", target_sys_name as "Target"
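One sketch for filling the missing date columns, using only fields already present in the posted query: xyseries only creates a column for dates that actually appear in the events, so a dummy row carrying every date in the search window can be appended before xyseries and dropped afterwards, after which fillnull can do its job. The "__dummy__" marker value is an assumption made up for this sketch:

```
...
| append
    [ makeresults
      | addinfo
      | eval _time=mvrange(relative_time(info_min_time,"@d"), info_max_time, 86400)
      | mvexpand _time
      | eval date_reference=strftime(_time,"%Y-%m-%d"), combined="__dummy__", results="Not Run"
      | table combined date_reference results ]
| xyseries combined date_reference results
| where combined!="__dummy__"
| fillnull value="Not Run"
```

The idea is simply to guarantee that every day in the window exists as a column before fillnull runs; the rest of the original pipeline stays unchanged.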
Hello Splunkers,

I have an IIS log I need to open and search through every 15 minutes. If I see 10 consecutive occurrences of the field ErrorCode= within a five-minute period, I want to trigger an alert. ErrorCode= will be populated with different error codes, like this: ErrorCode=T1234. I don't want to count individual error codes, just the field ErrorCode. I have been able to search for occurrences using alerts, but the requirement is that they have to be consecutive occurrences.

Here is an example snippet. ErrorCode can return several different values; we don't care about the individual codes, we just want to trigger when the term ErrorCode appears in 10 consecutive lines of the log during a time window of 5 minutes:

LogTypeID="x", InfoSourceID="x", ErrorText="xxxxxxxx", ErrorCode="XXXXXX", ErrorDescription="xxxxxxxx", IISServerName="xxxxxx", CreatedDate="2020-08-14 10:19:34.557", CreatedBy="xxxxxx", MemberShipID="xxxxxx", RegisterRequestTime="2020-08-14 10:16:06.0", AppCode="xxxxxx", InvalidAppCode="xxxxxx,"

Any help would be appreciated. Thanks, John
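A sketch of the "consecutive" logic (the index and sourcetype names are placeholders): search all events, flag the ones carrying ErrorCode, and use streamstats over a sliding window of 10 events to test that all 10 had the flag and fell within 5 minutes:

```
index=iis_example sourcetype=iis_example_log
| eval has_error=if(isnotnull(ErrorCode), 1, 0)
| sort 0 _time
| streamstats window=10 sum(has_error) as errors_in_window range(_time) as window_span
| where errors_in_window=10 AND window_span<=300
```

errors_in_window=10 means the last 10 events all contained ErrorCode, and window_span<=300 restricts those 10 events to a 5-minute span; the alert can then fire on "number of results > 0".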
Hi,

I need assistance extracting the domain from URLs received in Ironport logs and in Mimecast logs. I need a regex that returns only the domain, excluding the :port number, the http/https scheme, and the www. prefix.
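A rex sketch (the field name url is an assumption; adjust it to whatever your logs call the URL field):

```
| rex field=url "^(?:https?://)?(?:www\.)?(?<domain>[^/:?#]+)"
```

The optional non-capturing groups strip the scheme and the www. prefix, and the character class stops the domain capture at the first /, :, ?, or #, which excludes any port number and path.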
Hello,

I have a query: when I search for the keyword "error", I get data from indexes A and B. But when I want the data from index C, I need to write the search as "error index=c"; only then does it return the data from index C. Can I get the data from all three indexes A, B, and C just by searching for the word "error"? Many thanks for the help.
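When a search does not name an index, Splunk searches only the indexes listed as "indexes searched by default" for your role, which is presumably why A and B appear but C does not. Either add index C to the role's default indexes (Settings > Roles), or name all three explicitly:

```
(index=a OR index=b OR index=c) error
```

With index C added to the role's default searched indexes, a bare search for "error" would cover all three.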
Can anyone help explain why we are seeing these WARN messages in the logs and how to fix them permanently? We are performing a manual resync whenever the count of events is > 5 in a 15-minute time range, using the query below:

index=_internal host=searchhead* component=ConfReplicationThread log_level=WARN "Cannot accept push"
| bin span=15m _time
| stats max(consecutiveErrors) as count by host, _time
| where count>5

LOGS:
==========
WARN ConfReplicationThread - Error pushing configurations to captain=https://searchhead01.domain.com:8089, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=8d89fca5ef4520b00b8ffe8b1366a178b92b52fb; current_baseline_op_id=a948f0e3f0fcae707ce37ca7d7a73"
WARN ConfReplicationThread - Error pushing configurations to captain=https://searchhead01.domain.com:8089, consecutiveErrors=1 msg="Error in acceptPush: Non-200 status_code=400: ConfReplicationException: Cannot accept push with outdated_baseline_op_id=66098bdc22c2bcacf951fb104558db365ac64820; current_baseline_op_id=085e675e4c9d8c9fafabee"
WARN ConfReplicationThread - Error pulling configurations from captain=https://searchhead01.domain.com:8089, consecutiveErrors=1 msg="Error in fetchFrom, at=a6a747e7138353bd07873f04fe90f2c9b4564567: Network-layer error: Connect Timeout"
=============
We even tried reducing conf_replication_max_push_count to 50 (the default is 100). How can we resolve this permanently?
==============
WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=1525! Consider a lower value of conf_replication_max_push_count in server.conf on all members.
WARN ConfMetrics - single_action=PUSH_TO took wallclock_ms=2644! Consider a lower value of conf_replication_max_push_count in server.conf on all members.
WARN ConfMetrics - single_action=PULL_FROM took wallclock_ms=2011! Consider a lower value of conf_replication_max_pull_count in server.conf on all members.
WARN ConfMetrics - single_action=PULL_FROM took wallclock_ms=1778! Consider a lower value of conf_replication_max_pull_count in server.conf on all members.

Below are the settings in server.conf:

conf_replication_max_push_count = 50
conf_replication_purge.period = 3h
conf_replication_period = 10

We do not want to run a resync every time:

splunk resync shcluster-replicated-config
Currently, in our environment, any notable event that triggers would result in an automatic email sent to a distribution list. Our Management Team is interested in knowing if Splunk has any alternative ways of notifying individuals of notable events rather than email notifications. They raised a concern that if our email domain is hijacked or subjected to a DDoS attack, would there be another destination for Splunk to send alerts to? I was thinking that, if the email server were hijacked, we would just kill our SMTP connection. That may be the fastest way of dealing with such an issue. However, I'm open to hearing if Splunk does have the ability to send alerts to any destination other than email.
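Splunk alert actions are not limited to email: a saved search can also call a webhook (or a custom alert action script), so notables could go to a chat tool or paging system on a separate domain. A minimal savedsearches.conf sketch using the built-in webhook action (the stanza name and URL are placeholders):

```
[Example notable alert]
action.webhook = 1
action.webhook.param.url = https://hooks.example.com/splunk-alerts
```

The same webhook can be enabled from the UI under the alert's Trigger Actions; pointing it at an out-of-band service would keep notifications flowing even if the mail domain were unavailable.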
Hi, can someone help me with this error message? Could it be caused by this file and its size? Can I delete it?
Good afternoon, Is there a CLI command or a search I can perform to see how much data storage is being utilized by the excess buckets in my index cluster? Thank you!
Hi, I have a Splunk 8 server on CentOS 8 and I installed the ISE app. I have configured Splunk to receive syslog from the ISE server, but not all of the panels display data, even after I triggered many syslog messages from devices pointing to ISE. As I said, some panels display information, but others which should are not working as expected. @jconger, I see that I may need to update each eventtype to include the index in order to make them work, but I can't find the correct steps for this case.

Hope these details help, thanks!
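The usual pattern for this is to override the app's eventtypes in a local context so that each eventtype search names the index the ISE data actually lands in. A sketch (the stanza name, index, and sourcetype below are examples; copy the real stanza names from the app's default/eventtypes.conf):

```
# $SPLUNK_HOME/etc/apps/<ise_app>/local/eventtypes.conf
# Stanza name must match the one in default/eventtypes.conf
[example_ise_eventtype]
search = index=ise_index sourcetype="cisco:ise:syslog"
```

A local copy takes precedence over the default file, so panels driven by that eventtype should start searching the right index.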
Looking at Zoom log timestamps... I'm trying to figure out timestamps (and the accuracy of _time). The Zoom 'add-on' scene is a little confusing: there is the "Splunk Connect for Zoom" app (https://splunkbase.splunk.com/app/4961/), which is listed as an 'add-on', but it has no timestamp recognition config (no props.conf at all). Looking at the Splunk Add-on for RWI - Executive Dashboard (https://splunkbase.splunk.com/app/5063/) - this *does* have a props.conf and Zoom-specific configurations... but the only things related to timestamps are some search-time field extractions. No timestamp recognition configs. The search-time extractions are date-time strings, not epoch-time values... and are not exhaustive. (See the SPL below for analysis / comparison of timestamp values, including these extracted fields.)

I'm seeing that timestamp=none is getting assigned to every event, so that means timestamp recognition is being attempted and, presumably, failing. Which suggests that the _time value (when recognition fails) is the same as _indextime. I'm also seeing 'min' latency values of ~-18,000 seconds (suggesting Splunk is occasionally successfully recognizing a timestamp, but not getting the timezone right), and positive latency of ~74,000 seconds. More evidence that Splunk is occasionally recognizing a timestamp... but not accurately.

[screenshot: Zoom timestamp / latency diagnostic]

My question: given the issues we're seeing, and the variation in timestamps in events (see analysis below), what do the developers of the add-ons (or Splunk or Zoom) recommend as an approach to getting _time accurate?

Here is SPL to drive analysis of your events, grouped (stats) by event_type, type, and event:

index="<yourzoomindex>"
| regex _raw = "time|start|end"
| eval indextime = strftime(_indextime,"%+") `comment("NOTE: timestamp=none is a result of Splunk's timestamp parsing; occurs when it can't find (parse) a timestamp.")`
| fillnull value="-" event_type type event
| stats count
    count(payload.time_stamp) AS payload.time_stamp
    count(payload.object.date_time) AS object.date_time
    count(payload.object.start_time) AS object.start_time
    count(start_time) AS start_time
    count(payload.object.end_time) AS object.end_time
    count(end_time) AS end_time
    count(update_time) AS update_time
    count(payload.object.timezone) AS object.timezone
    count(payload.object.occurrences{}.start_time) AS occurrences.start_time
    count(payload.object.recurrence.end_date_time) AS recurrence.end_date_time
    count(payload.object.participant.join_time) AS participant.join_time
    count(join_time) AS join_time
    count(payload.object.participant.leave_time) AS participant.leave_time
    count(leave_time) AS leave_time
    count(payload.object.participant.sharing_details.date_time) AS participant.sharing_details.date_time
    count(payload.object.recording_file*.recording_start) AS recording_file*.recording_start
    count(payload.object.recording_file*.recording_end) AS recording_file*.recording_end
    first(_raw) AS sample_event
    by event_type type event
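If the events carry an epoch value in payload.time_stamp (as Zoom webhook payloads typically do), a props.conf sketch for explicit timestamp recognition might look like this; the sourcetype name, prefix, and epoch-milliseconds format are all assumptions to verify against your own events:

```
# props.conf - assumed sourcetype name; verify the actual one in use
[zoom:webhook]
TIME_PREFIX = \"time_stamp\"\s*:\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
TZ = UTC
```

%s%3N parses epoch milliseconds; if the field were epoch seconds, plain %s would apply. Making TZ explicit would also account for the ~-18,000-second (5-hour) offsets noted above.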
When running searches on the Monitoring Console, we are getting duplicates... pretty much every result is doubled.

Env: DMC + 10 SHs in a SHC + 230 indexers in a cluster (RF=3, SF=2). DMC on 7.3.3 + 16 GB + 12 cores.

Below is the search where we first found the symptoms, run against the past 7 days:

index=_internal source=*license_usage.log* type=RolloverSummary

When I run the same search on a production SH, it gives exactly half the usage, which we believe is correct. Why is this happening on the DMC?
Hello, I have created a custom alert action app as specified in the Splunk development documentation. I also created another, more basic app to test, and am still having the same issue. The issue is that the custom alert action's UI params are not saving. If I manually add action.testalert.param.subject to the entry in savedsearches.conf, it gets passed to my script, but when I save the params from the UI they never make it back to the config file.

alert_actions.conf:

[testalert]
is_custom = 1
label = Test Alert
payload_format = json
# A default param
param.foo = Bar

The alert action UI:

<splunk-control-group label="Subject">
  <splunk-text-input name="action.testalert.param.subject" id="subject">
  </splunk-text-input>
</splunk-control-group>

I have tried including a savedsearches.conf.spec to define the custom param, with no effect, and have also attempted changing the permissions/exports in default.meta without success. Any idea what is going on?
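For reference, the spec-file approach usually means a README/savedsearches.conf.spec in the app declaring each param; a sketch matching the stanza above (whether this alone fixes the UI save is uncertain, but this is the documented shape):

```
# README/savedsearches.conf.spec
action.testalert = <bool>
action.testalert.param.subject = <string>
```

If the spec is already in place, the next thing worth checking is whether the saving role has write access to the app's savedsearches.conf (i.e. the sharing/permissions of the alert itself).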
I have a very small app that is installed in our Splunk Cloud instance.

In props.conf I have:

[access_combined]
REPORT-access_combined = REPORT-extract_app_from_source

and in transforms.conf:

[REPORT-extract_app_from_source]
SOURCE_KEY = source
REGEX = [regex to extract app attribute]

This has been working perfectly to extract the app field from the source during searches. I have made a separate, unrelated update to the app, and now I am getting a failure when Splunk Cloud is vetting the app:

check_pretrained_sourcetypes_have_only_allowed_transforms - Only TRANSFORMS- or SEDCMD options are allowed for pretrained sourcetypes.

What is now the correct way to add these search-time attributes?
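Two things stand out. First, the transforms stanza name does not need the REPORT- prefix (that prefix belongs in props.conf only); a more conventional sketch of the same pair would be:

```
# props.conf
[access_combined]
REPORT-extract_app_from_source = extract_app_from_source

# transforms.conf
[extract_app_from_source]
SOURCE_KEY = source
REGEX = [regex to extract app attribute]
```

Second, the AppInspect check fires because access_combined is a pretrained sourcetype; whether renaming alone satisfies the vetting, or whether the extraction must move to a custom sourcetype, is something Splunk Cloud support can confirm.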
A forwarder which was working before stopped sending data about a month ago. After checking, it is confirmed that the forwarder is correctly configured, the service is running, and the host is phoning home to the indexer; however, there is no data showing on the indexer side, nor when a search is run in Splunk for that forwarder. Can someone please point me in the right direction?
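A first diagnostic step could be to check what the forwarder itself is logging, since its _internal logs travel over the same output path as the data (the hostname below is a placeholder):

```
index=_internal host=<forwarder_hostname> source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
```

If that returns events, the pipeline to the indexer works and the problem is likely on the inputs side (file already read, permissions, blacklists); if it returns nothing, the outputs/network side is the place to look.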
Trying to fine-tune Suspected Network Scanning, since we are getting lots of false positives for our AD servers doing DNS lookups and for endpoints going to external sites that use lots of Akamai-related IPs. We have the threshold set at 500 (see below), but I'm wondering if we can make the scanning detection more fruitful by excluding our AD servers' DNS lookups (port 53) and excluding all external IPs in our search/query. I'm assuming we want network scanning to really only look at internal IPs (private ranges). New to Splunk, so please forgive my lack of knowledge. Thanks!

| tstats summariesonly=t allow_old_summaries=t dc(All_Traffic.dest_port) as num_dest_port dc(All_Traffic.dest_ip) as num_dest_ip from datamodel=Network_Traffic by All_Traffic.src_ip
| rename "All_Traffic.*" as "*"
| where num_dest_port > 500 OR num_dest_ip > 500
| sort - num_dest_ip
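A sketch of those two exclusions layered onto the posted search (the CIDR ranges are the RFC 1918 private ranges; the AD server IPs are placeholders):

```
| tstats summariesonly=t allow_old_summaries=t dc(All_Traffic.dest_port) as num_dest_port dc(All_Traffic.dest_ip) as num_dest_ip
    from datamodel=Network_Traffic
    where All_Traffic.dest_port!=53
      AND All_Traffic.dest_ip IN ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
      AND NOT All_Traffic.src_ip IN ("10.0.0.1", "10.0.0.2")
    by All_Traffic.src_ip
| rename "All_Traffic.*" as "*"
| where num_dest_port > 500 OR num_dest_ip > 500
| sort - num_dest_ip
```

Restricting dest_ip to private ranges keeps Akamai and other external destinations out entirely; if scans against external hosts still matter, the dest_ip filter would need loosening, and a lookup of known-good CDN ranges might fit better.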
We have a CEF output that isn't sending events via the cefout command. If we take the scheduled search and just remove the trailing |cefout, the search returns results.

<Datamodel search> | stats ... | cefout ...

For some unknown reason, the events don't get sent when the cefout command is included. If we use only the first query, before the stats command, it works and sends events with |cefout. Adding in the | stats and subsequent lines causes it to not send events.
Hello All,

We have one Splunk server running on CentOS 6 and another running on RHEL 6. We are planning to upgrade the OS from CentOS 6 to 7 and from RHEL 6 to 7. Is there anything to do on the Splunk side, such as stopping the service during the upgrade? Apart from that, is there anything else to do, and will the upgrade create any impact on Splunk? Is there any OS flavour compatibility that needs to be checked before the upgrade?