All Posts

How can I avoid it? I need the correct timestamp on each event.
Why do you want to do this? Splunk is designed to use cooked data between its components. If you really want to break your installation, you can find instructions in outputs.conf and inputs.conf, and in some articles and answers.
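For reference, the outputs.conf setting usually involved is sendCookedData; a minimal sketch, assuming a placeholder receiver host and port:

```ini
# outputs.conf on the forwarding instance -- group name and receiver are placeholders
[tcpout:raw_group]
server = receiver.example.com:9997
# send raw (uncooked) data instead of Splunk's cooked wire format
sendCookedData = false
```

As the post warns, sending uncooked data between full Splunk instances is rarely what you want; the receiver then has to re-parse everything.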
Splunk recognizes your account id as a timestamp. If you take it as an epoch number and convert it to human-readable form, the two match.
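You can see the effect with a quick SPL search; the account id value below is made up for illustration:

```
| makeresults
| eval account_id = 1701234567
| eval as_time = strftime(account_id, "%Y-%m-%d %H:%M:%S")
```

A ten-digit number in that range converts to a plausible recent date, which is exactly why Splunk's timestamp extraction latches onto it.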
@isoutamo but if I give the same props.conf with KV_MODE=json and distribute it to both indexers and search heads, will it lead to duplication of events, or is it fine?
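For context, KV_MODE is a search-time setting, so deploying the same stanza to both tiers does not create or duplicate indexed events; a sketch of the stanza in question, with a hypothetical sourcetype name:

```ini
# props.conf -- sourcetype name is a placeholder
[my_json_sourcetype]
# KV_MODE extracts fields at search time only; it never writes events,
# so shipping this to indexers as well as search heads is harmless
KV_MODE = json
```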
Hi, I am confused about what I am getting in the logs. It looks like this: timestamp, environment, event: { JSON format, and under that, original: java.lang.Throwable and a list of errors }, host, loglevel, log file, message: java.lang.Throwable. I get this type of data when I search the index; it is Logstash data coming into Splunk in JSON format. Now I am confused about what to do with it: is the data fine as it is, or can I filter it further and get some other output that would be meaningful for me? If there is a way to do that, please share.
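If the events are valid JSON, one way to pull out the nested fields and filter is spath; the index, sourcetype, and field names below are guesses based on the description:

```
index=your_index sourcetype=your_sourcetype
| spath input=_raw
| search loglevel=ERROR
| table _time, host, loglevel, message
```

From there you can decide which fields are worth keeping, aggregating (e.g. with stats), or alerting on.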
Splunk support came back and stated this is a known issue: the 9.4.0 update has a problem with the Splunk DB Connect app. The workaround was time consuming, but finally everything is back up and running.

I had to manually go into /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf and comment out each line with tail_rising_column_init_ckpt_value and checkpoint_key. Then restart Splunk, then go into each input config and manually reset the checkpoint value to what was recorded in the tail_rising_column_init_ckpt_value setting. Took forever, but after doing all that and another Splunk restart, only then did all the issues go away.

Also noted that the 9.4.0 update removes the legacy tail_rising_column_init_ckpt_value from the db_inputs.conf file, as it is now stored in the kvstore, and since the kvstore was updated in 9.4.0, that was the overall issue. Just yet another mess that Splunk updates have caused, but at least support is aware, and they are working hard to properly fix it.
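For anyone following along, the edit described above looks roughly like this; the input stanza name is a placeholder, and the recorded values are whatever your own file contained:

```ini
# /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf
[my_db_input]
# commented out per the workaround above; as of the 9.4.0 update these
# checkpoints are stored in the kvstore instead of this file
#tail_rising_column_init_ckpt_value = <previously recorded value>
#checkpoint_key = <previously recorded value>
```

Note the recorded tail_rising_column_init_ckpt_value before commenting it out, since that is the value to re-enter when resetting each input's checkpoint in the UI.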
Splunk is trying to find a timestamp in your events. Unfortunately, your account id looks like the internal representation of a date-time, i.e. the number of seconds since 1 Jan 1970, so Splunk assigns the timestamp accordingly.
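A common fix is to tell Splunk explicitly where (or whether) to look for a timestamp in props.conf on the parsing tier; the sourcetype name, prefix, and format below are assumptions to adapt to your data:

```ini
# props.conf on the indexer or heavy forwarder -- stanza name is a placeholder
[my_sourcetype]
# only look for a timestamp immediately after this marker
TIME_PREFIX = ^timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
# stop scanning after this many characters, so the account id is never reached
MAX_TIMESTAMP_LOOKAHEAD = 25

# alternatively, if the events carry no usable timestamp at all:
# DATETIME_CONFIG = CURRENT
```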
@sc_admin11 can you run btool and check props.conf?
/opt/splunkforwarder/bin/splunk btool props list --debug
Great, so how do I configure the SH to send uncooked data?
Hi @danielbb, usually the SH also sends cooked data; only UFs, by default, send uncooked data. Ciao. Giuseppe
I got it. However, I'm setting up these three machines and I would like the HF to send cooked data while the SH sends uncooked data to the indexer. Based on what you're saying, it appears that whenever we forward the data, it is already cooked. Is that right?
Hi @danielbb, an HF is a full Splunk instance where logs are forwarded to other Splunk instances, and which isn't used for other roles (e.g. Search Head, Cluster Manager, etc.). It's usually used to receive logs from external sources such as service providers, or to concentrate logs from other forwarders (heavy or universal). It's also frequently used as a syslog server, but a UF can serve the same purpose. So it's a conceptual definition, not a configuration; the only relevant configuration for an HF is log forwarding. Ciao. Giuseppe
That's great, but what, in the configuration, defines an HF as an HF?
Hi @danielbb, if you need to execute local searches on the local data on the HF, you can use the indexAndForward option; otherwise you don't need it. Obviously, if you use this option, you index your data twice and you pay double license. About cooked data: by default all HFs send cooked data; in fact, if you need to apply transformations to your data, you have to put the conf files on the HFs. Anyway, HFs send cooked data both with indexAndForward = true and indexAndForward = false; to send uncooked data you have to apply a different configuration in your outputs.conf, but in that case you give more work to your indexers. Ciao. Giuseppe
@_gkollias It is already set to "0" (unlimited). Is there anything else I should update?
Hi, we have moved to using dashpub+ in front of Splunk (https://conf.splunk.com/files/2024/slides/DEV1757C.pdf) and have a Raspberry Pi behind a TV running the Anthias digital signage software (https://anthias.screenly.io/). This setup arguably works better than Splunk TV, as dashpub+ allows the dashboards to be accessible to anyone (so we can have anonymous access to selected dashboards), and Anthias can share content other than just Splunk. Also, a Pi is a lot cheaper than an Apple box. A
root@ip-10-14-80-38:/opt/splunkforwarder/etc/system/local# ls
README  inputs.conf  outputs.conf  server.conf  user-seed.conf
I don't see any props.conf here.
It's not clear to me how indexAndForward works. The documentation says: "Set to 'true' to index all data locally, in addition to forwarding it." Does it mean that the data is being indexed in two places? If so, what should we do to produce cooked data AND forward it to the indexer?
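As described in the replies above, with indexAndForward the HF keeps a local indexed copy and also forwards the events, so the data does end up in both places (and, per Giuseppe's note, counts twice against license). A sketch of the outputs.conf involved on the HF; the group name and receiver address are placeholders:

```ini
# outputs.conf on the heavy forwarder
[indexAndForward]
# keep a local indexed copy in addition to forwarding
index = true

[tcpout:primary_indexers]
server = idx1.example.com:9997
```

Forwarding cooked data needs no extra setting here: a full Splunk instance sends cooked data by default regardless of whether indexAndForward is enabled.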
Did you ever get this resolved? Seeing a similar issue.
I am following up on this issue. Was it resolved? If so, what was the solution, as I am experiencing a similar issue. We know it is not a networking issue on our end after going through some network testing between the indexers and the CM. All ports that need to be open between these components are open, and latency between them is between 0.5 and 1 ms.