No, it will not work. strftime() renders the time to a string. Leave the INGEST_EVAL alone. But seriously, you'd have to get that value using json_extract, text functions, or an earlier transform extracting an indexed field. Ugly.
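To illustrate the json_extract approach mentioned above, a minimal transforms.conf sketch, assuming the raw event is JSON with an "@timestamp" key and that your Splunk version supports json_extract in INGEST_EVAL (the stanza name [add_time] is taken from the thread; note _time must stay a numeric epoch, so there is no outer strftime()):

[add_time]
INGEST_EVAL = _time=strptime(json_extract(_raw, "@timestamp"), "%Y-%m-%dT%H:%M:%S.%9N%:z")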
Hi @PickleRick, Just to inform you, I am using the props settings below in my Prod env, but I can still see discrepancies.

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
You can do

| tstats max(_time) as latest where index=* by index

but depending on the time range you use for the search, it will only return data for those indexes that have data during that timespan. You can also do

| rest /services/data/indexes count=0
| table title maxTime minTime
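As a small follow-on sketch, the epoch value from the tstats search above can be rendered human-readable with fieldformat (the field name "latest" comes from that search; fieldformat and strftime are standard SPL):

| tstats max(_time) as latest where index=* by index
| fieldformat latest=strftime(latest, "%Y-%m-%d %H:%M:%S")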
Is there any other way outside of using syslog? Either way, thanks for all of the input @PickleRick @gcusello. Just an exercise on my part.
Did you actually read the docs? Especially those for the JDBC module?
1. If you just use the same time format, it makes no sense to move your extraction to INGEST_EVAL.
2. During index time you don't have search-time extracted fields, so you'd have to get the contents of that field manually.

But indeed, as @bowesmana demonstrated (and I confirmed in my lab with %3N as well), you can just do %9N and it will still get only 6 digits and ignore the seventh one.
Hi @PickleRick

I have corrected transforms.conf from

[add_time]
INGEST_EVAL = _time=strftime(strptime(@timestamp, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")

to

[add_time]
INGEST_EVAL = _time=strftime(strptime(_time, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")

Note: In my opinion, the parsing of the timestamp should be correct first so that we can convert it using INGEST_EVAL. In my case, the time format ("%Y-%m-%dT%H:%M:%S.%9N%:z") is not parsing properly, which may cause an issue during timestamp conversion.
Hi @PickleRick, I have tried the workaround below, but the timestamp is still not converting as per my requirement.

props.conf

[timestamp_change]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%:z
TRANSFORMS-add_time = add_time

transforms.conf

[add_time]
INGEST_EVAL = _time=strftime(strptime(@timestamp, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")
This seems to be heavily broken. Even if I use %3N in your example I still get 6 digits parsed.
Hello, I want to see the latest data time for all indexes, i.e. when the most recent data came into each index.
The possible problem in a situation such as yours is that you have seven digits of partial seconds followed by a time zone specifier. So if you cut your timestamp at %6N, you won't be parsing the timezone. But you can't include it, because there is no way to tell Splunk to "skip one character". So you'd have to make sure you have the proper TZ set for this source. Alternatively, you can use INGEST_EVAL. I still think it would be easier if your source pushed an explicit timestamp along with the event so you wouldn't have to parse it.
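One hedged sketch of the INGEST_EVAL alternative, assuming the raw event is JSON with an "@timestamp" key and that replace() and json_extract are available at ingest time in your Splunk version: trim the seventh fractional digit with a regex so that both %6N and the timezone specifier can be parsed:

[fix_time]
INGEST_EVAL = _time=strptime(replace(json_extract(_raw, "@timestamp"), "(\.\d{6})\d", "\1"), "%Y-%m-%dT%H:%M:%S.%6N%:z")

The stanza name [fix_time] and the regex are illustrative only; test in a lab before relying on this in production.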
It would appear to work, so use %9N
It's a shame that people reply but don't offer any real help. Telling someone to look at the specs does not help if you haven't taken the time to read the problem.
Thanks @bowesmana for showcasing this example. So is it okay to skip microseconds and use %9N in a real-time data flow?
Hello Splunkers!! I am facing an issue while data gets ingested from the DB Connect plugin to Splunk. I have described the scenario below; I need your help fixing it. In DB Connect, I obtain the latest value with the STATUS value "FINISHED". However, when the events come into Splunk, I get the values with the STATUS value "RELEASED", without the latest timestamp (UPDATED). What I am doing so far: I am using the rising column method to get the data into Splunk to avoid duplicates in ingestion.
Did you find the answer? I'm new to this platform and got stuck at the same problem. 
I have not used set diff very often, but the set diff command appears to also look at hidden fields, and when doing inputlookup there is a field _mkv_child. It's possible that this is affecting the results. You could try

| set diff [ | inputlookup test.csv | fields url | fields - _* ] [ | inputlookup test2.csv | fields url | fields - _* ]

but you can also do it this way, which is really how lookups are intended to work - it's always good to avoid using join in Splunk, as it has limitations, and joining on lookups is just not how to do things in Splunk:

| inputlookup test2.csv
| lookup test.csv url OUTPUT url as found
| where isnull(found)

So this takes test2 and looks for the urls present in test.csv and retains those not found.
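For completeness, the same lookup pattern works in the opposite direction if you also want the urls in test.csv that are missing from test2.csv (same file and field names as above; a sketch, not tested against your data):

| inputlookup test.csv
| lookup test2.csv url OUTPUT url as found
| where isnull(found)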
You did mention that field2 doesn't exist, and that is exactly what fillnull will do: it will create the field in any event where it is missing and give it the value you specify. So when you say it didn't work, can you elaborate - what didn't work? field2 WILL be created if it does not exist in a log source, so

top field1 field2 field3 field4

will not ignore results where field2 does not exist, because after fillnull it will ALWAYS exist. Perhaps you can show examples of the data and your SPL.
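Putting the two pieces together, a minimal sketch using the field names from this thread (the placeholder value "unknown" is an assumption; substitute whatever suits your data):

| fillnull value="unknown" field2
| top field1 field2 field3 field4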
Each of the two lookups has URL information. And I queried it like this:

1)
| set diff [| inputlookup test.csv] [| inputlookup test2.csv]

2)
| inputlookup test.csv
| join type=outer url [| inputlookup test2.csv | eval is_test2_log=1]
| where isnull(is_test2_log)

The two results are different, and the actual correct answer is number 2. In case 1 there are 200 results; in case 2 there are 300 results. I don't know why the two results are different. Or even if they are different, shouldn't there be more results from number 1?
Thanks for the reply, but that didn't work; I should have mentioned that "field2" doesn't exist in the source data in some of the logs. So some logs are: field1, field2, field3, field4, and others are: field1, field3, field4, so the header "field2" doesn't exist at all in some of the data. I want to return results whether or not they have a "field2".