
Hunk timestamp problem

Path Finder

Hello there, everyone!

We have a YARN Hadoop cluster running fine, as well as Hunk 6.4.0, which connects to the cluster and can run queries against it.
If I run a query like index=fs_vindex | dedup computerName | table computerName over All Time, I get a list of all the servers that are sending logs to the location specified for this vindex.
But if I change the time picker to a specific range (today, or any other), the query returns nothing and, worse, I don't see any errors in the logs.

I've searched a lot but still can't find a solution; I hope someone can help.
Here are the configuration files:
Here comes the configuration files:

indexes.conf

[fs_vindex]
vix.description = Index virtual do FS
vix.input.1.et.format = yyyyMMddHH
vix.input.1.et.regex = /storage/data/BR_SIEM_success.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*
vix.input.1.lt.format = yyyyMMddHH
vix.input.1.lt.regex = /storage/data/BR_SIEM_success.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*
vix.input.1.path = /storage/data/BR_SIEM_success.EventLogFS/hourly/...
vix.provider = hadoop_producao
vix.provider.description = Ambiente
vix.input.1.lt.offset = 3600

[provider:hadoop_producao]
vix.command.arg.3 = $SPLUNK_HOME/bin/jars/SplunkMR-hy2.jar
vix.description = Ambiente
vix.env.HADOOP_HOME = /usr/lib/hadoop
vix.env.JAVA_HOME = /opt/java
vix.family = hadoop
vix.fs.default.name = hdfs://srvmasterofpuppets:8020
vix.mapreduce.framework.name = yarn
vix.output.buckets.max.network.bandwidth = 0
vix.splunk.home.hdfs = /user/hunk
vix.hadoop.security.authorization = 0
vix.splunk.impersonation = 0

Thank you in advance!

1 Solution

Path Finder

Hello there @kschon and @Ledion (and anyone else who runs into this problem)!

I've finally solved it: the reason Hunk wasn't able to parse the timestamp field is that it was typed as a string. I changed it to long and, as if by magic, Hunk started to understand and parse it.
Hope this helps anyone else facing this problem.

Best Regards!



Path Finder

Hello @burwell, sorry for the late answer; I was traveling and couldn't reply earlier.
My sources here are JSON files generated by other software that I control.
So, to fix this, we changed the timestamp field in the JSON schema from string to long. For example:
We had:
timestamp: "276257257257265"
Now we have:
timestamp: 276257257257265
I don't really know whether this kind of change can be done in Splunk before the timestamp recognition phase; maybe @kschon could tell us 😄
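To see concretely why the schema type matters to a consumer, here is a minimal Python sketch (my own illustration, not the poster's actual bus schema; the epoch value is borrowed from the file names later in this thread):

```python
import json

# Two hypothetical events mirroring the schema change described above:
# before the fix the timestamp was a JSON string, afterwards a JSON number.
before = json.loads('{"timestamp": "1462816800000"}')
after = json.loads('{"timestamp": 1462816800000}')

print(type(before["timestamp"]).__name__)  # str  -> needs explicit parsing
print(type(after["timestamp"]).__name__)   # int  -> usable as epoch millis directly

# A consumer that expects a number must convert the string form itself:
epoch_seconds = int(before["timestamp"]) / 1000.0
print(epoch_seconds)  # 1462816800.0
```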


Splunk Employee

I believe you can handle this case via a calculated field. For example, you could add this to props.conf:

[]
EVAL-_time = strptime(timestamp, "%s")

However "%s" expects a 10-digit epoch time string, and it looks like you have more precision than that, so you would probably need to use substr too. Since you can change the generating schema, that's probably a lot simpler.
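Building on that, a sketch of what the substr variant might look like in props.conf (the stanza name is a placeholder for your sourcetype, and this assumes the extra digits beyond the first 10 are sub-second precision):

```conf
[your_sourcetype]
# Keep only the first 10 digits (epoch seconds) before parsing with %s;
# the remaining digits would be sub-second precision (e.g. milliseconds).
EVAL-_time = strptime(substr(timestamp, 1, 10), "%s")
```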


SplunkTrust

I've been having timestamp problems too, and I see my timestamps are strings as well. Please share exactly how you worked around this issue. Thanks!


Splunk Employee

I'm glad that it's working! For my own information, how did you change the field type? Is it a change in the sourcetype?


Path Finder

Hello there @kschon!
No, I changed it "before" Hunk, at my data source, which is JSON generated by our Big Data bus, so I only changed the field's schema type from string to long 🙂


Splunk Employee

I notice that your directory name indicates that it contains files for hour=15, which I believe means 3pm-4pm, and your files have a last-modified date in the range 6pm-7pm, which would be hour=18. This might be because the process creating the files is using a different timezone than the process which moves the files to HDFS. Do the events in those files also have timestamps with hour=15 or hour=18? If the latter, then you need to either change the timezone, as Ledion suggested, or else use an earliest time offset of 10800 (i.e. 3 hours) and a latest time offset of 14400 (i.e. 4 hours).
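For reference, the suggested offsets would look like this in the vindex stanza (a sketch only, with the values taken straight from the suggestion above):

```conf
# earliest time: path hour + 3 hours; latest time: path hour + 4 hours
vix.input.1.et.offset = 10800
vix.input.1.lt.offset = 14400
```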


Path Finder

Hello there @kschon! The events have hour=15. As I said, I tried timezones GMT-3, GMT, and even GMT+3, but it didn't work. About the time offsets: are they used to account for the difference between modification time and event time?
Maybe you could help me?
I can't find more detailed information about this in the knowledge base.


Splunk Employee

Last mod time usually is not used for anything. I was just making an inference about the events in those files.

For each file in a virtual index, the "earliest time" and "latest time" should correspond to the timestamps of the first and last events in the file, respectively. They are used to determine whether Hunk needs to read that file: if the file presumably has no events in the time range of the query, then we might as well skip it. But the actual timestamp of each event in that file is determined by the event's contents and the event's sourcetype. (The file's last modification time might be used if no other information is available.) An event whose timestamp is outside the time range of the search will still be rejected, no matter what "et" and "lt" its file has. So if "et" and "lt" are not configured correctly, then Hunk will incorrectly reject the files containing events it wants, and correctly reject events from the files it does read.

An offset determines the difference between a time read from a file's path, and the correct value. In your example stanza at the top, you use the same value for "vix.input.1.et.regex" as "vix.input.1.lt.regex", and the same value for "vix.input.1.et.format" as "vix.input.1.lt.format". Without an offset, et would be the same as lt for each file. Adding "vix.input.1.lt.offset = 3600" means that the presumed latest time for the file will be one hour after the earliest time. In my example, I just added 3 hours to both.
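The mechanics described above can be sketched in Python (my own reconstruction for illustration, not Hunk's actual implementation), using the regex, format, and lt offset from the stanza at the top of the thread:

```python
import re
from datetime import datetime, timedelta, timezone

# Sketch of how et/lt can be derived from a file path: the regex capture
# groups are concatenated, parsed with the configured format, and the
# offset (in seconds) is added.
PATH_RE = re.compile(
    r"/storage/data/BR_SIEM_success\.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*"
)
ET_OFFSET = 0     # no vix.input.1.et.offset configured
LT_OFFSET = 3600  # vix.input.1.lt.offset from the stanza above

def extract_times(path, tz=timezone.utc):
    m = PATH_RE.match(path)
    stamp = "".join(m.groups())  # e.g. "2016050915"
    base = datetime.strptime(stamp, "%Y%m%d%H").replace(tzinfo=tz)
    return base + timedelta(seconds=ET_OFFSET), base + timedelta(seconds=LT_OFFSET)

et, lt = extract_times(
    "/storage/data/BR_SIEM_success.EventLogFS/hourly/2016/05/09/15/foo.avro"
)
print(et.isoformat())  # 2016-05-09T15:00:00+00:00
print(lt.isoformat())  # 2016-05-09T16:00:00+00:00
```

With identical et/lt regexes and formats, only the lt offset separates the two values, which is exactly the point made above.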


Path Finder

Hello again @kschon!

I got what you mean about the offset. My files each hold only a few minutes of data, since each directory covers one hour (as in the configuration below).
Even so, I configured et.offset to 3h and lt.offset to 4h, and also tried the GMT and GMT-3 timezones, but it still doesn't work. I've played around with the offset values, but still without any improvement.

I also spotted something I believed was an error: I had about 5 virtual indexes, all with the same "vix.xxx.1.xxx.xxx" settings, so I thought that might be causing the problems. I've deleted all the virtual indexes except the one below.

[wintel_fs_index]
vix.description = Index virtual do FS
vix.input.1.et.format = yyyyMMddHH
vix.input.1.et.regex = /storage/data/BR_SIEM_success.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*
vix.input.1.et.offset = 3600
vix.input.1.et.timezone = America/Sao_Paulo
vix.input.1.lt.format = yyyyMMddHH
vix.input.1.lt.regex = /openbus/data/BR_SIEM_success.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*
vix.input.1.lt.offset = 3600
vix.input.1.path = /openbus/data/BR_SIEM_success.EventLogFS/hourly/...
vix.input.1.lt.timezone = America/Sao_Paulo
vix.provider = hadoop_producao
vix.provider.description = Ambiente

But I'm still not able to get results from time-ranged queries.


Splunk Employee

Are the events timestamped correctly? I.e., does the event timestamp correspond to the path of the file where the event is found?

Unrelated, I'd recommend using the following more efficient search to get a table of computerName (and a count of events)

index=fs_vindex | stats count BY computerName 

Path Finder

Hello @Ledion.
I've played with the timestamp adjustment, but without success.
Our data is in the BRT timezone (GMT-3), but our HDFS timezone is GMT, so the directory names are in the same timezone as the events' hour, while the files written to HDFS have +3 hours (GMT).
So, in the virtual index configuration, I've tried GMT, GMT+3, GMT-3, and some others, but none of them seemed to work for us.
Maybe I'm doing the time extraction the wrong way?

vix.input.1.et.format = yyyyMMddHH
vix.input.1.et.regex = /storage/data/BR_SIEM_success.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*
vix.input.1.lt.format = yyyyMMddHH
vix.input.1.lt.regex = /storage/data/BR_SIEM_success.EventLogFS/hourly/(\d{4})/(\d{2})/(\d{2})/(\d{2})/.*

I think I'm missing something; any advice would help us a lot!
Ah, this is what I see in "Explore Data":

storage/data/BR_SIEM_success.EventLogFS/hourly/2016/05/09/15/
Type    Name    Owner   Size    Permissions Last Modified Time 
BR_SIEM_success.0.0.66.73582308.1462816800000.avro  hdfs    56.32 KB    rw-r--r--   May 9, 2016 6:01:36 PM
BR_SIEM_success.10.10.21.76501680.1462816800000.avro    hdfs    19.82 KB    rw-r--r--   May 9, 2016 6:01:49 PM
BR_SIEM_success.11.11.42.75316419.1462816800000.avro    hdfs    37.62 KB    rw-r--r--   May 9, 2016 6:01:59 PM
BR_SIEM_success.2.2.23.80555652.1462816800000.avro  hdfs    20.89 KB    rw-r--r--   May 9, 2016 6:02:36 PM
BR_SIEM_success.3.3.92.80099611.1462816800000.avro  hdfs    77.57 KB    rw-r--r--   May 9, 2016 6:01:49 PM
BR_SIEM_success.5.5.23.78514890.1462816800000.avro  hdfs    20.66 KB    rw-r--r--   May 9, 2016 6:01:54 PM
BR_SIEM_success.7.7.21.78649173.1462816800000.avro  hdfs    19.22 KB    rw-r--r--   May 9, 2016 6:01:59 PM
BR_SIEM_success.8.8.44.74151067.1462816800000.avro  hdfs    38.13 KB    rw-r--r--   May 9, 2016 6:01:54 PM
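As a sanity check (my own observation, not from the thread): the 1462816800000 component of those file names parses as an epoch in milliseconds, and it lines up with the hour=15 directory when read in GMT-3, matching the 3-hour gap the poster describes:

```python
from datetime import datetime, timedelta, timezone

# The file names above embed what looks like an epoch in milliseconds.
epoch_ms = 1462816800000
utc = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
brt = utc.astimezone(timezone(timedelta(hours=-3)))  # BRT (GMT-3), fixed offset

print(utc.isoformat())  # 2016-05-09T18:00:00+00:00 -> hour=18, like the mod times
print(brt.isoformat())  # 2016-05-09T15:00:00-03:00 -> hour=15, like the directory
```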

Path Finder

Hello there @Ledion! Yes, there are events and the timestamps are correct; I can see them when I run the All Time query and when I browse with "Explore Data".
I didn't use stats because I don't need the count of events, only the name of each server. Is it really faster? Is it because it's a single operation, whereas the way I'm doing it uses two?

Thanks


Splunk Employee

I assume you don't see any events if you run a search over a given time period (hour, day, week, etc.), correct? If so, this is generally an indication that the time extracted from the events differs from the one extracted from the path. Is there a timezone difference between the path and Splunk's (or your user's) setting in Splunk?

I highly recommend that you specify the timezone of the time extracted from the path:

vix.input.[N].et/lt.timezone - the timezone in which to interpret the extracted time, e.g. "America/Los_Angeles" or "GMT-8:00".

See http://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html for more info on the supported strings.


Path Finder

Great, Ledion, now I get what you're saying. Going to check that and report back later!
