All Posts

Hi Team, We are planning to perform a silent installation of the Splunk Universal Forwarder on a Linux client machine. So far, we have created a splunk user on the client machine, downloaded the .tgz forwarder package, and extracted it to the /opt directory. Currently, the folder /opt/splunkforwarder is created, and its contents are accessible.

I have navigated to the /opt/splunkforwarder/bin directory, and now I want to execute a single command to: agree to the license without prompts, and set the admin username and password.

I found a reference for a similar approach on Windows, where the following command is used:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

However, I couldn't find a single equivalent command for Linux that accomplishes all these steps together. Could you please provide the exact command to achieve this on Linux?
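For reference, a common pattern on Linux is to seed the admin credentials via a user-seed.conf file before first start, then start the forwarder with the non-interactive license flags. This is a sketch, not an exact drop-in — the username and password values here are placeholders, and the seed file only takes effect on the very first startup:

```
# Seed the admin credentials (placeholder values - change them).
# This file is consumed and removed on first start.
cat > /opt/splunkforwarder/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = SplunkAdmin
PASSWORD = Ch@ng3d!
EOF

# Start the forwarder, accepting the license without any prompts
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
```

Run the start command as the splunk user (or chown /opt/splunkforwarder to it first) so file ownership stays consistent.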
Most probably your DB query initially returned one status which got ingested from the input but later something within your DB changed the status. But since the TASKID is the primary identifier for the ingested records, the same TASKID will not be ingested again. Hence the discrepancy between the DB contents and the indexed data.
Ah, so your problem was actually _not_ the same as the original one. That's why there is rarely a point in digging up old threads.
Thanks, but this colors the background of the cell. I need to color only the font.
Can someone support me on this topic?
Thank you for your reply. I've solved the problem. It was caused by an incorrect definition of my macro "cim_Network_Resolution_indexes".
And do you have acceleration enabled on this datamodel? The summariesonly=true option tells Splunk to only use accelerated summaries for searching, not the raw events.
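To illustrate the effect, here is a minimal comparison sketch (the datamodel name Network_Resolution is assumed from the macro mentioned above); if acceleration is disabled or incomplete, the first search can legitimately return nothing while the second still finds events:

```
| tstats summariesonly=true count from datamodel=Network_Resolution

| tstats summariesonly=false count from datamodel=Network_Resolution
```

With summariesonly=false, Splunk falls back to searching raw events where no summary exists, which is slower but complete.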
No, it will not work. strftime() renders the time to a string. Leave the INGEST_EVAL alone. But seriously, you'd have to get that value using json_extract(), text functions, or an earlier transform extracting an indexed field. Ugly.
Hi @PickleRick, Just to inform you, I am using the props settings below in my Prod env, but I can still see discrepancies.

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
You can do

| tstats max(_time) as latest where index=* by index

but depending on the time range you use for the search, it will only return data for those indexes that have data during that timespan. You can also do

| rest /services/data/indexes count=0
| table title maxTime minTime
Is there any other way outside of using syslog? Either way, thanks for all of the input @PickleRick @gcusello. Just an exercise on my part.
Did you actually read the docs? Especially those for the JDBC module?
1. If you just use the same time format, it makes no sense to move your extraction to INGEST_EVAL.
2. During index time you don't have search-time extracted fields, so you'd have to get the contents of that field manually.

But indeed, as @bowesmana demonstrated (and I confirmed in my lab with %3N as well), you can just use %9N and it will still get only 6 digits and ignore the seventh one.
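You can check this behavior yourself with a quick makeresults sketch (the sample timestamp with seven fractional digits is made up; the point is only to observe how many digits strptime() with %9N actually consumes):

```
| makeresults
| eval raw="2024-01-15T10:30:45.1234567+02:00"
| eval parsed=strptime(raw, "%Y-%m-%dT%H:%M:%S.%9N%:z")
| eval rendered=strftime(parsed, "%Y-%m-%dT%H:%M:%S.%6N%z")
| table raw parsed rendered
```

If the rendered value shows .123456 with the correct offset applied, the parser kept microsecond precision and silently dropped the seventh digit, matching what was observed in the lab.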
Hi @PickleRick  I have corrected transforms.conf from

[add_time]
INGEST_EVAL = _time=strftime(strptime(@timestamp, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")

to

[add_time]
INGEST_EVAL = _time=strftime(strptime(_time, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")

Note: In my opinion, the parsing of the timestamp should be correct first so that we can convert using INGEST_EVAL. In my case, the time format ("%Y-%m-%dT%H:%M:%S.%9N%:z") is not parsed properly, which may cause an issue during timestamp conversion.
Hi @PickleRick, I have tried the workaround below, but the timestamp is still not converting as per my requirement.

props.conf

[timestamp_change]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TIME_PREFIX = \"@timestamp\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%:z
TRANSFORMS-add_time = add_time

transforms.conf

[add_time]
INGEST_EVAL = _time=strftime(strptime(@timestamp, "%Y-%m-%dT%H:%M:%S.%9N%:z"), "%Y-%m-%dT%H:%M:%S.%QZ")
This seems to be heavily broken. Even if I use %3N in your example I still get 6 digits parsed.
Hello, I want to see the latest data time for all indexes, i.e. when the most recent data arrived in each index.
The possible problem in a situation such as yours is that you have seven digits of partial seconds followed by a time zone specifier. So if you cut your timestamp at %6N, you won't be parsing the timezone, but you can't include it because there is no way to tell Splunk to "skip one character". So you'd have to make sure you have the proper TZ set for this source. Alternatively, you can use INGEST_EVAL. I still think it would be easier if your source pushed an explicit timestamp along with the event so you wouldn't have to parse it.
It would appear to work, so use %9N
It's a shame that people reply but don't offer any real help. Telling someone to go look at the specs does not help if you haven't taken the time to read their problem.