Monitoring Splunk

Late indexing

uagraw01
Motivator

Hello Splunkers!!

Issue Description
We are experiencing a significant delay in data ingestion (>10 hours) for one index in Project B within our Splunk environment. Interestingly, Project A, which operates with a nearly identical configuration, does not exhibit this issue, and data ingestion occurs as expected.

Steps Taken to Diagnose the Issue
To identify the root cause of the delayed ingestion in Project B, the following checks were performed:

Timezone Consistency: Verified that the timezone settings on the database server (source of the data) and the Splunk server are identical, ruling out timestamp misalignment.

Props Configuration: Confirmed that the props.conf settings align with the event patterns, ensuring proper event parsing and processing.

System Performance: Monitored CPU performance on the Splunk server and found no resource bottlenecks or excessive load.

Configuration Comparison: Conducted a thorough comparison of configurations between Project A and Project B, including inputs, outputs, and indexing settings, and found no apparent differences.
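
In addition to the checks above, the delay itself can be quantified by comparing each event's index time with its parsed event time. A minimal search for this (the index name below is just a placeholder for the Project B index):

index=<project_b_index> sourcetype=wmc_events earliest=-24h
| eval lag_sec = _indextime - _time
| stats count avg(lag_sec) AS avg_lag_sec max(lag_sec) AS max_lag_sec by host sourcetype

A lag that sits consistently around 36,000 seconds suggests a timestamp or timezone offset rather than a slow ingestion pipeline.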

Observations
The issue is isolated to Project B, despite both projects sharing similar configurations and infrastructure.

Project A processes data without delays, indicating that the Splunk environment and database connectivity are generally functional.

Screenshot 1: uagraw01_0-1745211229899.png
Screenshot 2: uagraw01_1-1745211284326.png

Event sample:

TIMESTAMP="2025-04-17T21:17:05.868000Z",SOURCE="TransportControllerManager_x.onStatusChangedTransferRequest",IDEVENT="1312670",EVENTTYPEKEY="TRFREQ_CANCELLED",INSTANCEID="210002100",OBJECTTYPE="TRANSFERREQUEST",OPERATOR="1",OPERATORID="1",TASKID="10030391534",TSULABEL="309360376000158328"

props.conf

[wmc_events]
CHARSET = AUTO
KV_MODE = AUTO
SHOULD_LINEMERGE = false
description = WMC events received from the Oracle database, formatted as key-value pairs
pulldown_type = true
TIME_PREFIX = ^TIMESTAMP=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TZ = UTC
NO_BINARY_CHECK = true
TRUNCATE = 10000000
#MAX_EVENTS = 100000
ANNOTATE_PUNCT = false
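
As a quick sanity check on the settings above, the raw TIMESTAMP field, the _time Splunk extracted, and the index time can be put side by side (the index name is a placeholder; times are rendered in the search user's timezone):

index=<project_b_index> sourcetype=wmc_events earliest=-24h
| head 5
| eval parsed_time = strftime(_time, "%Y-%m-%d %H:%M:%S %Z")
| eval indexed_at = strftime(_indextime, "%Y-%m-%d %H:%M:%S %Z")
| table TIMESTAMP parsed_time indexed_at

If parsed_time matches the TIMESTAMP written in the event but indexed_at is roughly 10 hours later, timestamp extraction is working and the offset is coming from the source data itself.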

livehybrid
SplunkTrust

Hi @uagraw01 

It's very suspicious that the time difference looks to be about -36,000 seconds, which is pretty much *exactly* 10 hours.

Could there be an issue with timezones here? It doesn't sound like the data is blocked for exactly 10 hours in the ingestion pipeline; it seems more likely that a server earlier in the ingestion journey has an incorrect timezone.

The timezone set in props.conf for the given sourcetype (TZ=) is applied based on "the timezone provided by the forwarder", so it's worth checking the forwarder used for Project B, assuming it is different from the one used for Project A.
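
One way to confirm this is to check whether the offset is a near-constant 10 hours rather than a variable queueing delay, for example (index name is a placeholder):

index=<project_b_index> sourcetype=wmc_events earliest=-48h
| eval lag_hours = (_indextime - _time) / 3600
| timechart span=1h avg(lag_hours) max(lag_hours)

A flat line at roughly 10 hours points to a timezone shift applied before the event reaches the parsing pipeline; a value that rises and falls would point to a genuine ingestion delay.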

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing


uagraw01
Motivator

@livehybrid Both servers are in the same timezone; I have already compared the timezone settings on Project A and Project B.


ITWhisperer
SplunkTrust

Could it be that the file you are monitoring on the database server has not been closed/flushed, so the forwarder is unaware of any updates until later?


uagraw01
Motivator

@ITWhisperer  Data flushing is enabled for the required tables.


PickleRick
SplunkTrust

Wait a second. File or table? What kind of source does this data come from? Monitor input? DB Connect?

Have you checked the actual data with someone responsible for the source? I mean whether the ID or whatever it is in your data corresponds to the right timestamp?


uagraw01
Motivator

@PickleRick A Python script establishes a connection to the Oracle database, extracts data from the designated tables, and forwards the retrieved data to Splunk for ingestion.


PickleRick
SplunkTrust

Does the script send to HEC or write to a file? If HEC - which endpoint?


uagraw01
Motivator

Hi @PickleRick, just an update for you.

We have identified and resolved an issue related to a time discrepancy in our system, which was caused by the Oracle server's timezone configuration. The server was set to local time instead of UTC, resulting in a 10-hour time difference that affected the event timestamps ingested into the Project B index.

To address this, we have reconfigured the Oracle server to use UTC as the standard timezone, ensuring consistency and alignment with our operational requirements. This change has eliminated the time discrepancy, and all affected processes are now functioning as expected.
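
For completeness, a quick freshness check after the change (index name is a placeholder) shows the index is now keeping up:

index=<project_b_index> sourcetype=wmc_events earliest=-4h
| eval itime = _indextime
| stats max(itime) AS last_indexed
| eval minutes_behind = round((now() - last_indexed) / 60, 1)
| fieldformat last_indexed = strftime(last_indexed, "%Y-%m-%d %H:%M:%S")

minutes_behind now stays in the low single digits instead of drifting towards 600.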
