Splunk Search

Fix datetime.xml file in Splunk

muizash
Path Finder

So I have to update my datetime.xml file in Splunk because of the timestamp extraction problem that affects dates after 1 Jan 2020.

According to Splunk, we have to overwrite the existing file with the new file they provide.

Now my question:
I have 10 indexers, 20 search heads, 2 heavy forwarders, and thousands of universal forwarders (UFs).
Do I need to update datetime.xml on just my heavy forwarders?
Do I need to update the new datetime.xml on all indexers as well? If yes, please help me with how to push the configuration from the cluster master.

Thanks

0 Karma

PavelP
Motivator

According to https://docs.splunk.com/Documentation/Splunk/8.0.2/ReleaseNotes/FixDatetimexml2020#Impact, you need to patch UFs under the following known conditions:

  • When they have been configured to process structured data, such as CSV, XML, and JSON files, using the INDEXED_EXTRACTIONS setting in props.conf
  • When they have been configured to process data locally, using the force_local_processing setting in props.conf

If you don't process data locally on the UFs, you don't need to patch them. You can check for these settings with btool, as sketched below.
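
A quick way to check whether a forwarder matches either condition is btool (a sketch assuming a default Linux install of the UF):

    # Dump the merged props.conf across all config layers and look for
    # the two settings that force local timestamp processing on a UF
    $SPLUNK_HOME/bin/splunk btool props list --debug | grep -iE 'INDEXED_EXTRACTIONS|force_local_processing'

No output means the UF is not processing data locally and should not need the patch.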

0 Karma

anmolpatel
Builder

@muizash yes, you will need to update the datetime.xml on all the Splunk endpoints.
Option 1: Download the new datetime.xml and copy it to $SPLUNK_HOME/etc/, replacing the existing datetime.xml file. After that you will need to restart the Splunk instance. Note that this location cannot be touched by the deployment server, so you will need to push the file out to all your UFs using an alternative method. A rough sketch follows.
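
A minimal sketch of Option 1 on a single Linux instance, assuming the patched file was downloaded to /tmp/datetime.xml (adjust paths to your environment):

    # Back up the shipped file, swap in the patched one, then restart
    cp $SPLUNK_HOME/etc/datetime.xml $SPLUNK_HOME/etc/datetime.xml.bak
    cp /tmp/datetime.xml $SPLUNK_HOME/etc/datetime.xml
    $SPLUNK_HOME/bin/splunk restart

For the UFs, the same steps can be wrapped in whatever orchestration you already have (Ansible, a shell loop over SSH, etc.).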

Option 2: Upgrade the Splunk version you're running across all instances. The new install ships with the updated datetime.xml file.
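
On the original question about pushing to the indexers from the cluster master: master-apps can't overwrite $SPLUNK_HOME/etc/datetime.xml directly, but one workaround often used in the community is to ship the patched file inside an app and point props.conf at it via the DATETIME_CONFIG setting, then apply the bundle. A sketch, where the app name fix_datetime is made up:

    # On the cluster master: stage an app carrying the patched file
    mkdir -p $SPLUNK_HOME/etc/master-apps/fix_datetime/default
    cp /tmp/datetime.xml $SPLUNK_HOME/etc/master-apps/fix_datetime/default/

    # In the same app's default/props.conf, point all sourcetypes at it
    # (the path is interpreted relative to $SPLUNK_HOME on each peer):
    # [default]
    # DATETIME_CONFIG = /etc/apps/fix_datetime/default/datetime.xml

    # Validate, then push the bundle to all peers
    $SPLUNK_HOME/bin/splunk validate cluster-bundle
    $SPLUNK_HOME/bin/splunk apply cluster-bundle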

0 Karma