Installation

Lost date_* field from my data after porting my environment to production

jfunderburg
Explorer

I developed my Splunk environment in a lab, complete with reports to find non-business-hour logins. It was working fine. I ported my environment to production and blew away the data in my indexes to start with a clean copy. It has been running fine for several months. I just noticed none of my non-business-hour reports are catching events. I am utilizing (date_hour>18 OR date_hour<7) OR (date_wday=Sunday OR date_wday=Saturday) in my SPL. It is the same report that tested successfully in my lab. Why are the date_* fields no longer working? Could this be a configuration file on my indexers? What creates those fields?

-Jeff


mattymo
Splunk Employee

Those fields are extracted by default as long as a timestamp is being extracted from your data by Splunk (and yeah, like Badri suggested, make sure you check in Verbose mode to be sure). I believe the timestamp recognition settings for the sourcetype in props.conf would be the only place that might affect this.
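As a sketch of what that looks like (the stanza name and timestamp format here are assumptions; adjust them to match your sourcetype and event layout), timestamp recognition is controlled by settings like these in props.conf:

```
[WinEventLog:Security]
# Hints for locating and parsing the event timestamp. If Splunk cannot
# extract a timestamp from the event text (e.g. DATETIME_CONFIG = CURRENT,
# or the format does not match), the date_* fields are not generated.
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The key point is that date_* fields only exist for events whose timestamp was parsed out of the raw event data, not taken from the current time or file modification time.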

https://docs.splunk.com/Documentation/Splunk/7.0.0/Knowledge/Usedefaultfields


- MattyMo

jfunderburg
Explorer

Okay, so I have discovered all 6 of my Splunk servers are properly creating the date_* fields. None of the clients I have are creating these fields. They all receive their configurations from a deployment server. I am running Splunk_TA_windows, which is being pushed from the deployment server. The only files I have in the local directory are inputs.conf, outputs.conf, and wmi.conf, which means props.conf and the other config files are defaults. I have not edited any defaults. I am using the SAME TA on my Splunk servers; my search head gets its config from the deployment server also, and it appears to be functioning properly. When I was running this in my lab it was also working fine. The issue is residing in my production environment. I am reviewing the docs you attached (thank you for that). Where else should I look? Could the TA be corrupted on all my workstations? Should I delete it, let all clients remove the TA, then get a clean copy installed? Should I run a command on a client to see if it is properly applying any of the confs?
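One way to see what configuration a client is actually applying is Splunk's btool utility, run from the forwarder's install directory (the paths below assume a default Windows UF install):

```
# On a Universal Forwarder (Windows), from the Splunk install directory:
.\bin\splunk.exe btool props list --debug
.\bin\splunk.exe btool inputs list --debug
```

The --debug flag prints which file each effective setting came from, which makes it easy to spot a stanza that is missing or being overridden.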

I am completely lost as to why the Splunk servers are creating the fields for their records but none of my other servers are. For reference, this is Windows Security logs. I have a mixed 2008/2012/2016 environment. All my Splunk servers are 2012, and I have many other 2012 servers as well. The logs are gathered and transmitted using Universal Forwarders. I am running 6.5.0 Enterprise and 6.5.0 Forwarders. I see all of the other Security log data; the events are just missing the date_* fields. I am trying to use a report I wrote for non-business-hour logins, but it is only working for logins on the Splunk servers because the other systems are not receiving the date_* fields.
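To quantify which hosts are affected, a search along these lines (the index name is taken from the inputs.conf below; everything else is a sketch) tallies events with and without date_hour per host:

```
index=wineventlog
| eval has_date=if(isnotnull(date_hour), "yes", "no")
| stats count by host, has_date
```

Hosts that only ever show has_date="no" are the ones whose timestamps are not being parsed from the event data.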


mattymo
Splunk Employee

Can you share your inputs so we can see what exactly you are collecting? You may need the props on the forwarders if we are relying on INDEXED_EXTRACTIONS, as the UF actually handles that. Maybe deploy exactly what you have on your Splunk Server, including props...and see if that fixes it.

- MattyMo

jfunderburg
Explorer

I have to manually retype the inputs, but I can do that if necessary; this is running on a closed network. The UFs are running 6.5.0, and Splunk_TA_windows is the same app used on the UFs and the Splunk Enterprise servers. It was my understanding (and my knowledge is spotty at best) that the UFs cannot do parsing and rely on the indexer to do the parsing for them. If so, that would explain the behavior: the indexer is just failing to parse the data, and each Splunk Enterprise server is doing its own parsing. If I am incorrect about the UF not being able to parse, then I assume the props.conf in SPLUNK_HOME/etc/apps/Splunk_TA_windows/default would be handling the parsing. If so, could it be the Windows TA app I am using off of Splunkbase? I use that exact same app on my Enterprise servers.

If the UF IS unable to do parsing, how do I make my indexers do the parsing? Obviously they are NOT. The indexers have the Splunk_TA_windows app running in SPLUNK_HOME/etc/slave-apps/Splunk_TA_windows (with a default directory, props.conf, etc.). Do I need it in the apps folder as well as slave-apps if I want it to do the parsing for my UFs? Is it capable of doing the parsing for my UFs? FYI, I have a 3-indexer cluster and I believe it has plenty of processor power and memory to handle the load.
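For reference, in an indexer cluster the master pushes everything under etc/master-apps to each peer's etc/slave-apps as a configuration bundle, and the peers do read props.conf from slave-apps, so a second copy under etc/apps should not be needed. A sketch of the flow described in this thread (paths assume a default install):

```
# Cluster master
SPLUNK_HOME/etc/master-apps/Splunk_TA_windows/default/props.conf
        |   (distributed as a configuration bundle)
        v
# Each indexer (cluster peer)
SPLUNK_HOME/etc/slave-apps/Splunk_TA_windows/default/props.conf
```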


jfunderburg
Explorer

I use the EXACT SAME inputs.conf and Splunk_TA_windows app on the Enterprise servers and forwarders (Splunk is installed on 2012 R2 servers). My search head gets the same app from the deployment server that all other Windows UFs are receiving (and the search head is working); the indexers get it from the index master's master-apps directory, pushed as a configuration bundle. Again, all 6 Splunk Enterprise servers DO create date_* fields and I can query them, but my other machines using Universal Forwarders DO NOT. (inputs.conf is in the local directory of Splunk_TA_windows; I do not edit ANY files in default, and props.conf is only in default. I did not copy it to local on any servers or clients.)

The input is straight from Splunk_TA_windows/default/inputs.conf, except I set disabled to false and created a couple of whitelists and blacklists to filter out extraneous log data. Basically I am pulling the following Windows event logs (I am not pulling any monitors or performance data, only Windows EVT/EVTX files):

[default]
evt_resolve_ad_obj = 1
evt_dc_name = asep.tsmil.mil

[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
whitelist = 16, 11707, 11724, 50, 51, 900, 901, 902, 903
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Windows Powershell]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
whitelist = (a lot of specified event IDs; I am getting the event IDs and the events)
blacklist = (specific event IDs with Message = a regex expression, where the data is too chatty and irrelevant)
checkpointInterval = 5
index = wineventlog
renderXml = false

[WinEventLog://System]
disabled = 0
start_from = oldest
current_only = 0
whitelist = 16, 11707, 11724, 50, 51, 900, 901, 902, 903
checkpointInterval = 5
index = wineventlog
renderXml = false

The rest of the default stanzas in inputs.conf for Splunk_TA_windows are disabled.


jfunderburg
Explorer

https://answers.splunk.com/answers/221233/why-are-date-fields-are-not-being-extracted-from-w.html

Is there any truth in the above link? Why do my Enterprise servers running Windows, using the same inputs.conf and Splunk_TA_windows, work?


sbbadri
Motivator

@jfunderburg

Please run your query in Smart mode or Verbose mode, not in Fast mode.
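For context: Fast mode skips field discovery, so fields that are not referenced in the search may never show up in the results. One way to check the field regardless of search mode is to reference it explicitly (index name taken from this thread; the rest is a sketch):

```
index=wineventlog date_hour=*
| head 10
| table _time host date_hour date_wday
```

If this returns nothing for a given host, the field genuinely is not being extracted there, rather than just being hidden by the search mode.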


jfunderburg
Explorer

Thank you for answering. I was running Smart mode, but Verbose mode is not working either...
