
indexed time vs eventtime odd issue

a238574
Path Finder

We have a couple of Splunk environments running in AWS. We rehydrated (deployed a new AMI to) one of the environments last week, and this week I have run into a strange issue with the timing of indexed data. Before the rehydrate, data was typically indexed within 5-10 minutes of the event time. Now it appears that exactly one hour has been added to the time it takes to get the data indexed, and I am at a loss to explain this. The events are being forwarded from CloudTrail, and no CloudTrail changes have occurred.

Here is a sample of events prior to the environment rehydration (getting a new AMI):

e_time (event time)   i_time (index time)
06/20/18 08:27:18 06/20/18 08:31:40
06/20/18 08:27:03 06/20/18 08:31:40
06/20/18 08:26:48 06/20/18 08:31:40
06/20/18 08:26:32 06/20/18 08:31:40
06/20/18 05:00:14 06/20/18 05:11:13
06/20/18 04:37:59 06/20/18 04:49:45
06/20/18 03:01:46 06/20/18 03:09:51
06/20/18 02:58:34 06/20/18 03:09:51
06/20/18 03:25:55 06/20/18 03:31:40
06/20/18 03:25:39 06/20/18 03:31:40
06/20/18 03:25:36 06/20/18 03:31:40
06/20/18 03:25:21 06/20/18 03:31:40
06/20/18 03:25:20 06/20/18 03:31:40
06/20/18 00:47:21 06/20/18 00:59:58
06/19/18 23:43:47 06/19/18 23:51:38
06/19/18 23:43:31 06/19/18 23:51:38
06/19/18 23:43:31 06/19/18 23:51:38
06/19/18 21:00:13 06/19/18 21:07:14
06/19/18 20:59:58 06/19/18 21:07:14
06/19/18 20:59:43 06/19/18 21:07:14
06/19/18 20:59:28 06/19/18 21:07:14
06/19/18 20:59:27 06/19/18 21:07:14
06/19/18 19:42:55 06/19/18 19:47:33

After the rehydration you can see the roughly one-hour delay:

e_time (event time)   i_time (index time)
06/29/18 06:32:10 06/29/18 07:35:49
06/29/18 06:29:23 06/29/18 07:35:49
06/29/18 06:28:48 06/29/18 07:35:49
06/29/18 06:28:38 06/29/18 07:35:49
06/29/18 06:28:26 06/29/18 07:35:49
06/29/18 05:40:20 06/29/18 06:46:07
06/29/18 05:40:05 06/29/18 06:46:07
06/29/18 05:39:50 06/29/18 06:46:07
06/29/18 05:39:34 06/29/18 06:46:07
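
For reference, a comparison like the listings above can be produced with a search along these lines. This is only a sketch: the index name aws and the aws:cloudtrail sourcetype are assumptions, so adjust them to your environment.

index=aws sourcetype=aws:cloudtrail earliest=-24h
| eval lag_sec = _indextime - _time
| sort -_time
| eval e_time = strftime(_time, "%m/%d/%y %H:%M:%S"), i_time = strftime(_indextime, "%m/%d/%y %H:%M:%S")
| table e_time i_time lag_sec

The lag_sec column makes the per-event gap between event time and index time explicit, which is what changed after the rehydration.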

1 Solution

a238574
Path Finder

Found my answer. The data was being delivered via an SQS-S3 queue, and there was an issue with an assigned IAM role affecting one of the heavy forwarders. A single SQS-S3 input stream was not able to keep up with the traffic. Once we fixed the IAM role assignment, two input streams were able to drain the queue and the time delay went away.
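
For anyone hitting the same symptom: the fix amounted to correcting the IAM role on the affected heavy forwarder and running a second SQS-based S3 input against the same queue. A rough inputs.conf sketch for the Splunk Add-on for AWS is below; the stanza and parameter names are written from memory, and the account, role, and queue URL are placeholders, so verify everything against the add-on's inputs.conf.spec before using it.

# Two SQS-based S3 inputs draining the same CloudTrail queue.
# SQS message visibility lets multiple inputs share one queue safely.
# aws_account, aws_iam_role, and sqs_queue_url are placeholders.
[aws_sqs_based_s3://cloudtrail_sqs_1]
aws_account = my_aws_account
aws_iam_role = my_cloudtrail_role
sqs_queue_region = us-east-1
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/111111111111/cloudtrail-queue
s3_file_decoder = CloudTrail
sourcetype = aws:cloudtrail
interval = 300

# Second input reading from the same queue to add drain capacity.
[aws_sqs_based_s3://cloudtrail_sqs_2]
aws_account = my_aws_account
aws_iam_role = my_cloudtrail_role
sqs_queue_region = us-east-1
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/111111111111/cloudtrail-queue
s3_file_decoder = CloudTrail
sourcetype = aws:cloudtrail
interval = 300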


FrankVl
Ultra Champion

Sounds like a timezone issue or something similar. Did the clock on the Splunk server change due to the AMI update?


a238574
Path Finder

I checked all the servers. Time and timezone all appear to be set correctly.
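
One quick way to tell a timezone misparse apart from an ingest backlog (a sketch, assuming the same index and sourcetype as above): bucket the indexing lag and look at its distribution. A lag pinned at almost exactly 60 minutes for every event points at a timezone or timestamp-parsing offset; a lag that grows and shrinks with traffic, as in the listings above, points at a delivery backlog.

index=aws sourcetype=aws:cloudtrail earliest=-4h
| eval lag_min = round((_indextime - _time) / 60, 0)
| stats count by lag_min
| sort lag_min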
