Splunk index is receiving data, but it is not reflected in the application
We are currently investigating an issue where logs stop appearing in the UI after a short period of time. For example, in the apps_log index, logs are visible for a few minutes but then stop showing up.
This behavior is inconsistent across environments — in some, logs are visible as expected, while in others, they're missing entirely. The Splunk index appears to be receiving the data, but it's not being reflected in the application UI.
We're not yet sure what’s causing this discrepancy and would appreciate any insights or assistance you can provide.
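(For reference, one quick way to confirm that events really are landing in the index, independent of any app dashboards, is a tstats count per minute; apps_log below is assumed to be the actual index name.)

    | tstats count where index=apps_log by _time span=1m

If this shows a steady stream of events while the application UI shows nothing, the problem is more likely on the search/retention side than on ingestion.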
Hi @Priya
Please can you confirm: when you say logs stop appearing, is it that logs you were previously able to see are no longer visible? Or that logs start coming in (and remain visible) but new ones then stop arriving?
If logs are being indexed but only remain searchable for a short time, this could indicate an issue with the indexes.conf configuration (e.g. buckets being archived/frozen too soon). A sketch of what that misconfiguration might look like is below.
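For illustration only, a retention stanza like this on the indexer would make events disappear from search shortly after indexing (the index name apps_log and the values are assumptions, not your actual config):

    [apps_log]
    homePath   = $SPLUNK_DB/apps_log/db
    coldPath   = $SPLUNK_DB/apps_log/colddb
    thawedPath = $SPLUNK_DB/apps_log/thaweddb
    # Buckets whose newest event is older than this are frozen
    # (deleted by default). 600s = 10 minutes; the default is ~6 years.
    frozenTimePeriodInSecs = 600
    # A very small size cap can also force buckets to roll and freeze early.
    maxTotalDataSizeMB = 500

You can check the effective settings for the index on the indexer with:

    $SPLUNK_HOME/bin/splunk btool indexes list apps_log --debug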
If logs start being indexed but then seem to pause (and the old logs are still available/visible in Splunk), this suggests a blockage either in receiving the logs or in sending them. What is the source of the logs? Can you check the _internal logs for any errors, specifically around ingestion (some starting-point searches are below)? Can you see _internal logs from the hosts sending your data?
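For example, these two searches are a common starting point (illustrative; tighten the filters for your environment). The first surfaces ingestion-related errors and warnings, the second looks for blocked queues reported in metrics.log:

    index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
    | stats count by host, component

    index=_internal source=*metrics.log* group=queue blocked=true
    | stats count by host, name

If the forwarders' own _internal logs are missing entirely (e.g. index=_internal host=<forwarder_host> returns nothing recent), that points at the sending side rather than the indexer.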
Sorry for all the questions, but the answers will help us understand the problem better and prevent too much speculation!