All Posts

Hi @PickleRick, I have corrected transforms.conf from

[add_time]
INGEST_EVAL = _time=strftime(strptime(@timestamp, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")

to

[add_time]
INGEST_EVAL = _time=strftime(strptime(_time, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")

Note: In my opinion, the timestamp has to parse correctly first before we can convert it with INGEST_EVAL. In my case, the time format ("%Y-%m-%dT%H:%M:%S.%9N%:z") is not parsing properly, which may be what breaks the timestamp conversion.
Hi @PickleRick, I have tried the workaround below, but the timestamp is still not converting as required.

props.conf

[timestamp_change]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%:z
TRANSFORMS-add_time = add_time

transforms.conf

[add_time]
INGEST_EVAL = _time=strftime(strptime(@timestamp, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")
This seems to be heavily broken. Even if I use %3N in your example I still get 6 digits parsed.
Hello, I want to see the latest data time for all indexes, i.e. when the most recent data arrived in each index.
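One common way to get this, shown here as a sketch rather than a tested answer from this thread (the output field name is arbitrary), is a tstats search over all indexes:

| tstats latest(_time) as latest_event where index=* by index
| eval latest_event=strftime(latest_event, "%Y-%m-%d %H:%M:%S")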
The likely problem in a situation such as yours is that you have seven digits of partial seconds followed by a time zone specifier. If you cut your timestamp at %6N you won't be parsing the timezone, but you can't include it either, because there is no way to tell Splunk to "skip one character". So you'd have to make sure the proper TZ is set for this source. Alternatively, you can use INGEST_EVAL. I still think it would be easier if your source pushed an explicit timestamp along with the event so you wouldn't have to parse it at all.
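For the INGEST_EVAL route mentioned above, one possible sketch for transforms.conf, assuming the raw event is JSON with an @timestamp key (the regex, the json_extract call, and the %z vs %:z choice are assumptions to adapt, not a tested config for this data):

[add_time]
# Extract @timestamp from the raw JSON, drop the seventh fractional digit,
# then parse the remaining six digits plus the timezone offset into _time.
# Adjust the regex and the timezone specifier to match the actual data.
INGEST_EVAL = _time=strptime(replace(json_extract(_raw, "@timestamp"), "(\.\d{6})\d", "\1"), "%Y-%m-%dT%H:%M:%S.%6N%z")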
It would appear to work, so use %9N
It's a shame that people reply but don't offer any real help. Telling someone to look at the specs doesn't help when they have already taken the time to read up on the problem.
Thanks @bowesmana for showcasing this example. So is it okay to skip the microseconds, and is it good to use %9N in a real-time data flow?
Hello Splunkers!! I am facing an issue while data is being ingested from the DB Connect plugin into Splunk. I have described the scenario below and need your help fixing it. In DB Connect, the latest value I obtain has the STATUS value "FINISHED". However, when the events come into Splunk, I get the values with the STATUS value "RELEASED" and without the latest timestamp (UPDATED). What I am doing so far: I am using the rising column method to get the data into Splunk and avoid duplicates during ingestion.
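For context, a rising-column input runs a query shaped roughly like the sketch below, where the table and column names are hypothetical and ? is the checkpoint placeholder that DB Connect fills with the last stored rising-column value. Only rows whose rising-column value is greater than that checkpoint are fetched on each run, so later changes to a row that do not advance the rising column past the checkpoint are never re-ingested:

SELECT id, status, updated
FROM my_table
WHERE updated > ?
ORDER BY updated ASC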
Did you find the answer? I'm new to this platform and got stuck on the same problem.
I have not used set diff very often, but the set diff command appears to also look at hidden fields, and when doing inputlookup there is a field _mkv_child. It's possible that this is affecting the results. You could try

| set diff [ | inputlookup test.csv | fields url | fields - _* ] [ | inputlookup test2.csv | fields url | fields - _* ]

but you can also do it this way, which is really how lookups are intended to work - it's always good to avoid using join in Splunk, as it has limitations, and joining on lookups is just not how to do things in Splunk:

| inputlookup test2.csv
| lookup test.csv url OUTPUT url as found
| where isnull(found)

So this takes test2, looks for the urls present in test.csv, and retains those not found.
You did mention that field2 doesn't exist, and that is exactly the case fillnull handles. It creates the field in any event where it is missing and gives it the value you specify. So when you say it didn't work, can you elaborate - what didn't work? field2 WILL be created wherever it does not exist in the log source, so

top field1 field2 field3 field4

will not ignore results where field2 does not exist, because after fillnull it will ALWAYS exist. Perhaps you can show examples of the data and your SPL.
Each of the two lookups has URL information, and I queried them like this:

1) | set diff [| inputlookup test.csv] [| inputlookup test2.csv]

2) | inputlookup test.csv
   | join type=outer url [| inputlookup test2.csv | eval is_test2_log=1]
   | where isnull(is_test2_log)

The two results are different, and number 2 gives the actually correct answer. In case 1 there are 200 results, in case 2 there are 300 results. I don't know why the two results are different. Or even if they are different, shouldn't number 1 return more results?
Thanks for the reply, but that didn't work; I should have mentioned that "field2" doesn't exist in the source data in some of the logs. So some logs have field1, field2, field3, field4 and others have field1, field3, field4, meaning the header "field2" doesn't exist at all in some of the data. I want to return results whether or not they have a "field2".
@tem Did you ever find the fix for this? We are getting the same error “Failed to authenticate with gateway after 3 retries” and cannot figure it out. Ours is with the Ontap add-on, but it also uses the SA-Hydra app.
Generally, the knowledge bundle contains most of the content from the SH unless you blacklist some parts of it. Why not just deploy the apps to the indexer then, you might ask. Two reasons:

1. Variability of the KOs on the SHs - each time something changes on the SH (including users' private objects) you'd have to deploy new apps.
2. The same indexer(s) can be search peers for multiple different SH(C)s, each of which can have a separate set of search-time configs, possibly conflicting with each other.

So indexer-deployed apps are "active" at index time, while objects replicated in a knowledge bundle are active at search time.
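As an illustration of the blacklisting mentioned above, bundle content can be excluded in distsearch.conf on the search head. A minimal sketch, where the pattern is an assumed example for large CSV lookups rather than a recommendation:

[replicationBlacklist]
# Keep large CSV lookups out of the knowledge bundle (illustrative pattern only)
excludelookups = apps/*/lookups/*.csv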
Yes, this is the confusing point. Did you mean that if my search is:

index = main eventtype=authentication

this search will replicate a knowledge bundle which contains only the Knowledge Objects relevant to the search itself, not all the Knowledge Objects which exist on the search head?

Knowledge bundle replication overview - Splunk Documentation: "The process of knowledge bundle replication causes peers, by default, to receive nearly the entire contents of the search head's apps." Any explanation will be greatly appreciated!
Use

| fillnull field2 value=""

That will force all events with no field2 to have an empty value, rather than a null value. That's the normal way to force potentially null fields to exist when using them in split-by clauses, or in top, as in your case.
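Putting that together with the search from the question, a minimal sketch (the base search is a placeholder and the field names are taken from the thread):

<your base search>
| fillnull value="" field2
| top field1 field2 field3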
When I search I want to show the top results by a specific field "field1" and also show "field2" and "field3". The problem is that some results don't have a "field2" but do contain the other fields, and I get different results depending on whether I include "field2" in the search. Can I search and return all results whether or not "field2" exists?

| top field1 = all possible results
| top field1 field2 field3 = only results with all fields

What I want is just to show a blank line where "field2" would be on matches that don't have a "field2". Basically, make "field2" optional.
That's a start. You'll also need maxVolumeDataSizeMB so Splunk knows how large the volume is. Then each index definition needs to reference the volume by name.

[volume:MyVolume]
path = /some/file/path
maxVolumeDataSizeMB = <size of the volume in MB>

[MyIndexSaturated]
coldPath = volume:MyVolume/myindexsaturated/colddb
homePath = volume:MyVolume/myindexsaturated/db
thawedPath = $SPLUNK_DB/myindexsaturated/thaweddb
frozenTimePeriodInSecs = 1209600