Splunk support came back and stated this is a known issue: the 9.4.0 update has a problem with the Splunk DB Connect app. The workaround was time consuming, but finally everything is back up and running. I had to manually go into /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf and comment out each line containing tail_rising_column_init_ckpt_value or checkpoint_key, then restart Splunk, then go into each input's config and manually reset the checkpoint value to what was recorded in the tail_rising_column_init_ckpt_value setting. Took forever, but after doing all that and another Splunk restart, only then did all the issues go away. Also noted that the 9.4.0 update removes the legacy tail_rising_column_init_ckpt_value from the db_inputs.conf file, as the checkpoint is now stored in the KV store, and since the KV store was also updated in 9.4.0, that was the overall issue. Just yet another mess that Splunk updates have caused, but at least support is aware, and they are working hard to properly fix it.
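For anyone facing the same cleanup, the edit looks roughly like this. This is a sketch only: the stanza name, connection, and checkpoint value are hypothetical, and your db_inputs.conf entries will differ.

    # /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf
    [my_rising_input]                    # hypothetical input name
    connection = my_connection           # hypothetical connection
    mode = rising
    tail_rising_column_name = id
    # commented out per the workaround:
    # tail_rising_column_init_ckpt_value = 12345
    # checkpoint_key = ...

After the restart, reopen each input in the DB Connect UI and set its checkpoint back to the value that was recorded in tail_rising_column_init_ckpt_value.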
At .conf24, we shared that we were in the process of integrating Cisco Talos threat intelligence into Splunk Enterprise Security, Splunk SOAR, and Splunk Attack Analyzer. We know just how eager the community has been to see these integrations come to fruition, so we’re thrilled to share that all of the integrations are live!
Now, Splunk Security (cloud) customers can directly leverage Cisco Talos’ invaluable threat intelligence through Cisco Talos Intelligence for Enterprise Security, the Cisco Talos Intelligence connector for Splunk SOAR, and as a globally enabled feature in Splunk Attack Analyzer — at no additional cost.
To learn more, read our blog “Harness the Power of Cisco Talos Threat Intelligence Across Splunk Security Products” and then check out the following:
Cisco Talos Intelligence for Enterprise Security: Current Splunk Enterprise Security (cloud) customers can download the Cisco Talos Intelligence for Enterprise Security app from Splunkbase here and find additional guidance on leveraging the app’s capabilities here.
Cisco Talos Intelligence connector for Splunk SOAR: The Cisco Talos Intelligence connector for Splunk SOAR is now pre-installed for all current Splunk SOAR (cloud) customers. Additional guidance on leveraging the connector’s capabilities is available here.
Cisco Talos Intelligence in Splunk Attack Analyzer: These capabilities are globally enabled for all Splunk Attack Analyzer customers and don’t require any extra apps, connectors, or configuration. Check out this blog for additional details.
Splunk is trying to find a timestamp in your events. Unfortunately, your account id looks like the internal representation of a date/time, i.e. the number of seconds since 1st Jan 1970 (the Unix epoch), so Splunk assigns the timestamp accordingly.
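If the account id keeps winning, the usual fix is to point Splunk at the real timestamp explicitly in props.conf. A minimal sketch; the sourcetype name and the TIME_PREFIX/TIME_FORMAT values are hypothetical and depend on what your events actually look like:

    # props.conf on the parsing tier (indexer or heavy forwarder)
    [my_sourcetype]                    # hypothetical sourcetype
    TIME_PREFIX = timestamp=           # anchor to the real timestamp, past the account id
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    # or, if the events carry no timestamp at all, use index time:
    # DATETIME_CONFIG = CURRENT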
Got it. However, I'm setting up these three machines and I would like the HF to send cooked data while the SH sends uncooked data to the indexer. Based on what you're saying, it appears that whenever we forward the data, it is already cooked. Is that right?
Hi @danielbb , an HF is a full Splunk instance that forwards logs to other Splunk instances and isn't used for other roles (e.g. Search Head, Cluster Manager, etc.). It's usually used to receive logs from external sources such as service providers, or to concentrate logs from other forwarders (heavy or universal). It's also frequently used as a syslog server, though a UF can serve the same purpose. So it's a conceptual definition, not a configuration; the only relevant configuration for an HF is log forwarding. Ciao. Giuseppe
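For reference, that forwarding setup is just a standard outputs.conf on the HF; a minimal sketch, with hypothetical indexer addresses:

    # outputs.conf on the heavy forwarder
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997   # hypothetical indexers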
Hi @danielbb , if you need to run local searches on the local data on the HF, you can use the indexAndForward option; otherwise you don't need it. Obviously, if you use this option, you index your data twice and you pay double license. About cooked data: by default all HFs send cooked data; in fact, if you need to apply transformations to your data, you have to put the conf files on the HFs. Anyway, HFs send cooked data both with indexAndForward = true and indexAndForward = false; to send uncooked data you have to apply a different configuration in your outputs.conf, but in that case you give more work to your indexers. Ciao. Giuseppe
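The outputs.conf setting in question is sendCookedData; a sketch combining it with indexAndForward, using a hypothetical target group:

    # outputs.conf on the heavy forwarder
    [tcpout]
    defaultGroup = primary_indexers
    indexAndForward = true           # also index locally (doubles license usage, as noted)

    [tcpout:primary_indexers]
    server = idx1.example.com:9997   # hypothetical indexer
    sendCookedData = false           # send raw/uncooked data; parsing shifts to the indexers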
Hi, we have moved to using dashpub+ in front of Splunk (https://conf.splunk.com/files/2024/slides/DEV1757C.pdf) and have a Raspberry Pi behind a TV running the Anthias digital signage software (https://anthias.screenly.io/). This setup arguably works better than SplunkTV, as dashpub+ allows the dashboards to be accessible to anyone (so we can have anonymous access to selected dashboards), and Anthias can display content other than just Splunk. Also, a Pi is a lot cheaper than an Apple box. A
root@ip-10-14-80-38:/opt/splunkforwarder/etc/system/local# ls
README  inputs.conf  outputs.conf  server.conf  user-seed.conf

I don't see any props.conf here.
It's not clear to me how indexAndForward works. The documentation says: "Set to 'true' to index all data locally, in addition to forwarding it." Does that mean the data is being indexed in two places? If so, what should we do to produce cooked data AND forward it to the indexer?
I am following up on this issue. Was it resolved? If so, what was the solution? I am experiencing a similar issue. We know it is not a networking issue on our end after running some network tests between the indexers and the CM. All ports that need to be open between these components are open, and latency is between 0.5 and 1 ms.
Hi @danielbb , do you want to display all the events prior to one specified event, or all the events that match the same conditions? Either way, the approach is to define the latest time with a subsearch:

    <search_conditions1>
        [ search <search_conditions2>
          | head 1
          | eval earliest=_time-300, latest=_time
          | fields earliest latest ]
    | ...

In this way you use the _time of the <search_conditions2> event as latest and _time-300 seconds as earliest, applied to the primary search, which can be the same as the secondary one or different. Ciao. Giuseppe
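As a concrete illustration (the index, sourcetype, and status field here are hypothetical), this would show the five minutes of events leading up to the most recent 500 error:

    index=web sourcetype=access_combined
        [ search index=web sourcetype=access_combined status=500
          | head 1
          | eval earliest=_time-300, latest=_time
          | fields earliest latest ]
    | sort - _time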
We have a case where we can search and find events that match the search criteria. The client would like to see the events that occurred just before the one we matched via the SPL. Can we do that?
If the logs are from 2022, they should be timestamped as 2022 unless the props for the sourcetype say otherwise. Please share the props. Splunk defaults to one line per event, but that can be changed using props. Again, please share the props for this sourcetype.
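For reference, both behaviors live in props.conf on the parsing tier; a minimal sketch with a hypothetical sourcetype name and timestamp format:

    # props.conf for the sourcetype in question
    [my_sourcetype]                      # hypothetical name
    TIME_FORMAT = %Y-%m-%d %H:%M:%S      # parse the original 2022 timestamps
    MAX_TIMESTAMP_LOOKAHEAD = 19
    SHOULD_LINEMERGE = false             # one event per line
    LINE_BREAKER = ([\r\n]+)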