All Posts


I have been experimenting further, and found the following. This is my latest test config:

[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond=(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(time_temp), "aaa" . _raw, "bbb" . _raw)
#INGEST_EVAL = _raw=if(isnull(subsecond_temp), time_temp . " " . _raw, time_temp . subsecond_temp . " " . _raw)

Both md_time and md_subsecond are in the transforms list, and before md_fix_subsecond. If in md_fix_subsecond I check for the null-ness of either time_temp or subsecond_temp, then both are reported as null, so for some reason they are not available in the INGEST_EVAL. And as they are both null, referencing them resulted in an error, and so no log was output. How could we resolve this?
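One variation that might be worth trying (a sketch only, not verified against this data; stanza names reused from above): as far as I understand, INGEST_EVAL can only see the standard keys such as _raw and _time plus index-time fields stored in _meta, so values written to custom DEST_KEYs like time_temp or subsecond_temp are not visible to it, which would explain the nulls. Writing the capture into _meta as an indexed field, and referencing _time directly, should make the values reachable:

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond=(\.\d+)
# write the capture into _meta as an indexed field instead of a custom DEST_KEY
FORMAT = subsecond_temp::$1
DEST_KEY = _meta

[md_fix_subsecond]
# _time can be referenced directly in INGEST_EVAL, so a separate md_time transform may not be needed
INGEST_EVAL = _raw=if(isnull(subsecond_temp), strftime(_time, "%s") . " " . _raw, strftime(_time, "%s") . subsecond_temp . " " . _raw)

Note that subsecond_temp would then persist as an indexed field unless it is removed afterwards.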
I think that this time it's worth creating a support case with Splunk.
Create a props.conf stanza for the sourcetype that tells Splunk where the timestamp is and what it looks like.
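For example (a sketch with a hypothetical sourcetype name and timestamp format; adjust both to the actual events):

# props.conf on the first full Splunk instance that parses the data
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25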
If possible, add a real timestamp to your logs, then define its position and format in props.conf. Another option is to define in props.conf that Splunk must use the current time for indexing.
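As a sketch of the second option (same hypothetical sourcetype name as above), telling Splunk to stamp events with the current index time instead of searching the event for a timestamp:

# props.conf
[my_sourcetype]
DATETIME_CONFIG = CURRENT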
Another thing that may help you is setting parallelIngestionPipelines > 1 in server.conf. This doesn't help with individual files, but if there are many files it could help.
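Roughly like this (a sketch; the useful value depends on the CPU cores available on the ingesting instance):

# server.conf
[general]
parallelIngestionPipelines = 2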
How can I avoid it? I need a correct timestamp on each event.
Why do you want to do this? Splunk is designed to use cooked data between its components. If you really want to break your installation, you can find instructions in outputs.conf and inputs.conf, and in some articles and answers.
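For completeness, the setting involved would look roughly like this (a sketch with a hypothetical output group name, and again, generally not recommended between full Splunk instances):

# outputs.conf
[tcpout:my_indexers]
server = indexer1.example.com:9997
sendCookedData = false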
Splunk recognizes your account id as a timestamp. When you take it as an epoch number and convert it to human-readable form, the two match.
@isoutamo but if I use the same props.conf with KV_MODE=json and distribute it to both indexers and search heads, will it lead to duplication of events, or is it fine?
Hi, I am confused. What I am getting in the logs is the following:

timestamp:
environment:
event: { JSON format, under that original: java.lang.Throwable and a list of errors }
host:
loglevel:
log file:
message: java.lang.Throwable

I am getting this type of data when I search the index; it is Logstash data coming into Splunk in JSON format. Now I am confused about what to do with this data: is the data coming in fine as it is, or can I filter it further and get some other output that would be meaningful for me? If there is any way to do that, please share.
Splunk support came back and stated this is a known issue: the 9.4.0 update has an issue with the Splunk DB Connect app. The workaround was time consuming, but finally everything is back up and running. I had to manually go into: /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf ...and comment out each line with: tail_rising_column_init_ckpt_value checkpoint_key Then restart Splunk, then go into each input config and manually reset the checkpoint value to what was recorded in the tail_rising_column_init_ckpt_value setting. Took forever, but after doing all that and another Splunk restart, only then did all the issues go away. Also noted that the 9.4.0 update removes the legacy tail_rising_column_init_ckpt_value from db_inputs.conf, as it is now stored in the KV store, and since the KV store was updated in 9.4.0, that was the overall issue. Just yet another mess that Splunk updates have caused, but at least support is aware, and they are working hard to properly fix it.
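For reference, the edit looked roughly like this (a sketch with a hypothetical input stanza name and placeholder values; the real values come from your own db_inputs.conf):

# /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf
[my_db_input]
# commented out as part of the workaround; note the old value first, it is needed to reset the checkpoint in the UI
# tail_rising_column_init_ckpt_value = <previous checkpoint value>
# checkpoint_key = <previous checkpoint key>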
Splunk is trying to find a timestamp in your events. Unfortunately your account id looks like the internal representation of a date/time, i.e. the number of seconds since 1st Jan 1970, so Splunk assigns the timestamp accordingly.
@sc_admin11  can you run btool and check props.conf  /opt/splunkforwarder/bin/splunk btool props list --debug
Great, so how do I configure the SH to send uncooked data?
Hi @danielbb , usually the SH also sends cooked data; only UFs, by default, send uncooked data. Ciao. Giuseppe
I got it. However, I'm setting up these three machines, and I would like the HF to send cooked data while the SH should send uncooked data to the indexer. Based on what you're saying, it appears that whenever we forward the data, it is already cooked. Is that right?
Hi @danielbb , an HF is a full Splunk instance where logs are forwarded to other Splunk instances and which isn't used for other roles (e.g. Search Head, Cluster Manager, etc.). It's usually used to receive logs from external sources such as Service Providers, or to concentrate logs from other forwarders (heavy or universal). It's frequently also used as a syslog server, but a UF can be used for the same purpose. So it's a conceptual definition, not a configuration; the only relevant configuration for an HF is log forwarding. Ciao. Giuseppe
That's great, but what in the configuration defines an HF to be an HF?
Hi @danielbb , if you need to execute local searches on the local data on the HF, you can use the indexAndForward option; otherwise you don't need it. Obviously, if you use this option, you index your data twice and you pay double the license. About cooked data, by default all HFs send cooked data; in fact, if you need to apply transformations to your data, you have to put the conf files on the HFs. Anyway, HFs send cooked data both with indexAndForward = true and indexAndForward = false; to send non-cooked data you have to apply a different configuration in your outputs.conf, but in this case you give more work to your indexers. Ciao. Giuseppe
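As a sketch of the indexAndForward side (hypothetical output group name; sending uncooked data would additionally involve the sendCookedData setting shown in the reply above):

# outputs.conf on the HF
[tcpout]
defaultGroup = my_indexers
# index locally in addition to forwarding; doubles license usage
indexAndForward = true

[tcpout:my_indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997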
@_gkollias It's already set to "0" (unlimited). Is there anything else I should update?