All Posts

Great, so how do I configure the SH to send uncooked data?
Hi @danielbb , usually the SH also sends cooked data; only UFs send uncooked data by default. Ciao. Giuseppe
I got it. However, I'm setting up these three machines and I would like the HF to send cooked data while the SH sends uncooked data to the indexer. Based on what you're saying, it appears that whenever we forward the data, it is already cooked. Is that right?
Hi @danielbb , an HF is a full Splunk instance that forwards logs to other Splunk instances and isn't used for other roles (e.g. Search Head, Cluster Manager, etc.). It's usually used to receive logs from external sources such as Service Providers, or to concentrate logs from other Forwarders (Heavy or Universal). It's also frequently used as a syslog server, although a UF can serve the same purpose. So it's a conceptual definition, not a configuration; the only relevant configuration for an HF is log forwarding. Ciao. Giuseppe
That's great, but what in the configuration defines an HF as an HF?
Hi @danielbb , if you need to execute local searches on the local data on the HF, you can use the indexAndForward option; otherwise you don't need it. Obviously, if you use this option, you index your data twice and you pay double license. About cooked data: by default all HFs send cooked data; in fact, if you need to apply transformations to your data, you have to put the conf files on the HFs. Anyway, HFs send cooked data whether indexAndForward is true or false; to send uncooked data you have to apply a different configuration in your outputs.conf, but in that case you give more work to your Indexers. Ciao. Giuseppe
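
For reference, the uncooked-data behaviour Giuseppe mentions is controlled by the sendCookedData setting in outputs.conf on the HF. A minimal sketch, assuming a single indexer group (the group name and host:port are placeholders):

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997
# HFs parse data and send it cooked by default; false sends raw
# (uncooked) data and shifts the parsing work to the indexers
sendCookedData = false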
@_gkollias I already set it to "0" (unlimited). Is there anything else I should update?
Hi. We have moved to using dashpub+ in front of Splunk (https://conf.splunk.com/files/2024/slides/DEV1757C.pdf) and have a Raspberry Pi behind a TV running the Anthias digital signage software (https://anthias.screenly.io/). This setup arguably works better than SplunkTV, as dashpub+ allows the dashboards to be accessible to anyone (so we can have anonymous access to selected dashboards), and Anthias can share content other than just Splunk. Also, a Pi is a lot cheaper than an Apple box.
root@ip-10-14-80-38:/opt/splunkforwarder/etc/system/local# ls
README  inputs.conf  outputs.conf  server.conf  user-seed.conf

I don't see any props.conf here.
It's not clear to me how indexAndForward works. The documentation says: "Set to 'true' to index all data locally, in addition to forwarding it." Does it mean that the data is being indexed in two places? If so, what should we do to produce cooked data AND forward it to the indexer?
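
For reference, the quoted setting lives in the [tcpout] stanza of outputs.conf. A minimal sketch, with placeholder group and server names; with this enabled the data is stored both on the HF and on the indexers it forwards to, which is why the license is consumed twice:

[tcpout]
defaultGroup = my_indexers
# "Set to 'true' to index all data locally, in addition to forwarding it."
indexAndForward = true

[tcpout:my_indexers]
server = idx1.example.com:9997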
Did you ever get this resolved? Seeing a similar issue.
I am following up on this issue. Was it resolved? If so, what was the solution? I am experiencing a similar issue. We know it is not a networking issue on our end after going through some network testing between the indexers and the CM. All ports that need to be open between these components are open, and latency is between 0.5 and 1 ms.
Hi @danielbb , do you want to display all the events prior to one specified event, or all the events that match the same conditions? Either way, the approach is to define the latest time with a subsearch:

<search_conditions1>
    [ search <search_conditions2>
      | head 1
      | eval earliest=_time-300, latest=_time
      | fields earliest latest ]
| ...

In this way you use the _time of <search_conditions2> as latest and _time-300 seconds as earliest, applied to the primary search, which can be the same as the secondary search or different. Ciao. Giuseppe
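
As a concrete illustration of this template (the sourcetype and status filter are invented for the example), the following shows the five minutes of web events leading up to the most recent 500 error:

sourcetype=access_combined
    [ search sourcetype=access_combined status=500
      | head 1
      | eval earliest=_time-300, latest=_time
      | fields earliest latest ]
| sort _time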
We have a case where we can search and find events that match the search criteria. The client would like to see the events that occurred before the one we matched via the SPL. Can we do that?
@sc_admin11 To check why some events are parsed as single-line and others as multi-line events, please share the props.conf file.
If the logs are from 2022 they should be timestamped as 2022, unless the props for the sourcetype say otherwise. Please share the props. Splunk defaults to one line per event, but that can be changed using props. Again, please share the props for this sourcetype.
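
For reference, both behaviours mentioned here are controlled in props.conf. A minimal sketch, with a placeholder sourcetype name and a timestamp format assumed for illustration:

[my_sourcetype]
# read the timestamp from the start of the event, in the assumed format
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# break events on newlines rather than merging lines by timestamp
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)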
@sc_admin11 Could you please share the `inputs.conf` and `props.conf` files? Additionally, try using the `ignoreOlderThan` attribute in `inputs.conf`. The ignoreOlderThan setting in inputs.conf specifies an age threshold for files: Splunk will ignore files older than the specified value when indexing new data. This is useful for avoiding unnecessary processing of stale data.

[monitor:///<path of the file>]
disabled = false
index = <indexname>
sourcetype = <sourcetype>
ignoreOlderThan = 7d

NOTE: With ignoreOlderThan = 7d, Splunk will ignore files older than 7 days. From the inputs.conf spec:

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks.
* For example, a time window of a significant number of days or a small number of weeks is probably a reasonable choice. If you need a time window of a small number of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons.
Is there a REST API available for Notable Suppression? I want to get the suppression details and modify them via the REST API.
Thank you very much! You saved me a ton of time; I would never have thought to do it that way. This solution works great!
Hi Rakzskull, what is maxKBps set to in limits.conf on the Forwarder where you see this message? By default it is 256. If possible, you could try increasing the thruput limit. Also, this doc may be helpful: https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdela... It might be worth investigating why the file is so large, whether the data being logged is excessive, and if there are opportunities to optimize its size, rotation frequency, etc. Hope this helps!
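
For reference, the setting lives in the [thruput] stanza of limits.conf on the forwarder. A minimal sketch; the value below is only an example, and 0 removes the limit entirely:

[thruput]
# default is 256 (KB/s) on a Universal Forwarder; raise it, or set 0 for unlimited
maxKBps = 1024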