All Posts

Hi. We have moved to using dashpub+ in front of Splunk (https://conf.splunk.com/files/2024/slides/DEV1757C.pdf) and have a Raspberry Pi behind a TV running the Anthias digital signage software (https://anthias.screenly.io/). This setup arguably works better than Splunk TV, as dashpub+ makes the dashboards accessible to anyone (so we can have anonymous access to selected dashboards), and Anthias can share other content than just Splunk. Also, a Pi is a lot cheaper than an Apple box.
root@ip-10-14-80-38:/opt/splunkforwarder/etc/system/local# ls
README inputs.conf outputs.conf server.conf user-seed.conf
I don't see any props.conf here.
It's not clear to me how indexAndForward works. The documentation says: "Set to 'true' to index all data locally, in addition to forwarding it." Does that mean the data is being indexed in two places? If so, what should we do to produce cooked data AND forward it to the indexer?
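For what it's worth, indexAndForward is a [tcpout] setting in outputs.conf and only takes effect on a heavy forwarder (a universal forwarder cannot index locally). A minimal sketch, assuming hypothetical indexer hostnames:

    # outputs.conf on a heavy forwarder (sketch; hostnames are placeholders)
    [tcpout]
    defaultGroup = primary_indexers
    # keep a local indexed copy of all data in addition to forwarding it
    indexAndForward = true

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997

With indexAndForward = true the data is indeed stored twice: once in the forwarder's local indexes and once on the receiving indexers. A heavy forwarder already sends cooked (parsed) data when it forwards, so if you only want to parse and forward without a local copy, leave indexAndForward at its default of false.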
Did you ever get this resolved? Seeing a similar issue.
I am following up on this issue. Was it resolved? If so, what was the solution? I am experiencing a similar issue. We know it is not a networking issue on our end after going through some network testing between the indexers and the cluster manager. All ports that need to be open between these components are open, and communication latency is between 0.5 and 1 ms.
Hi @danielbb, do you want to display all the events prior to one specified event, or all the events that match the same conditions? Either way, the approach is to define the latest time with a subsearch:

    <search_conditions1>
        [ search <search_conditions2>
          | head 1
          | eval earliest=_time-300, latest=_time
          | fields earliest latest ]
    | ...

In this way you use the _time of the <search_conditions2> event as latest and _time minus 300 seconds as earliest, and apply them to the primary search, which can be the same as the secondary search or different. Ciao. Giuseppe
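As a concrete illustration (the index, sourcetype, and status filter here are hypothetical, not from the original question), finding the five minutes of web events leading up to the most recent 500 error could look like:

    index=web sourcetype=access_combined
        [ search index=web sourcetype=access_combined status=500
          | head 1
          | eval earliest=_time-300, latest=_time
          | fields earliest latest ]
    | sort 0 _time

Because the subsearch returns fields named earliest and latest, they are applied as time modifiers to the outer search.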
We have a case where we can search and find events that match the search criteria. The client would like to see the events that are prior in time to the one that we matched via the SPL. Can we do that?
@sc_admin11 To check why some events are indexed as single-line events and others as multi-line events, please share the props.conf file.
If the logs are from 2022 they should be timestamped as 2022 unless the props for the sourcetype say otherwise. Please share the props. Splunk defaults to one line per event, but that can be changed using props. Again, please share the props for this sourcetype.
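For context, a minimal props.conf sketch for this kind of case, assuming a hypothetical sourcetype name (the real settings depend on the actual log format and belong on the parsing tier, i.e. the indexers or a heavy forwarder):

    [my_custom_log]
    # one event per line
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # if the events contain no usable timestamp, index them with the current time
    DATETIME_CONFIG = CURRENT

If the events do carry a real timestamp, TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD would be used instead of DATETIME_CONFIG = CURRENT.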
@sc_admin11 Could you please share the `inputs.conf` and `props.conf` files? Additionally, try using the `ignoreOlderThan` attribute in `inputs.conf`. The ignoreOlderThan setting in inputs.conf specifies an age threshold for files: Splunk ignores files older than the specified value when indexing new data. This is useful for avoiding unnecessary processing of stale data.

    [monitor:///<path of the file>]
    disabled = false
    index = <indexname>
    sourcetype = <sourcetype>
    ignoreOlderThan = 7d

NOTE: ignoreOlderThan = 7d means Splunk will ignore files older than 7 days. From the spec:

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks.
* For example, a time window of a significant number of days or a small number of weeks is probably a reasonable choice.
* If you need a time window in small numbers of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons.
Is there a REST API available for Notable Suppression, to get the suppression details and modify them via REST API?
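One possible angle, assuming Splunk Enterprise Security (where suppressions are stored as eventtypes whose names start with notable_suppression-): they can be read and modified through the standard saved eventtypes REST endpoint. A rough sketch with hypothetical hostname, credentials, and suppression name:

    # list notable suppressions (sketch)
    curl -k -u admin:changeme \
      "https://es-sh.example.com:8089/services/saved/eventtypes?search=notable_suppression&output_mode=json"

    # update the search string that defines one suppression (names are hypothetical)
    curl -k -u admin:changeme \
      "https://es-sh.example.com:8089/services/saved/eventtypes/notable_suppression-my_suppression" \
      -d search="index=notable source=\"My Correlation Search\""

This is not a suppression-specific API, just the generic eventtypes endpoint, so check how your ES version stores suppressions before relying on it.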
Thank you very much! You saved me a ton of time, I would never have thought to do it that way. This solution works great! 
Hi Rakzskull, what is maxKBps set to in limits.conf on the forwarder where you see this message? By default it is 256. If possible, you could try increasing the thruput limit. Also, this doc may be helpful: https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdela... It might also be worth investigating why the file is so large, whether the data being logged is excessive, and whether there are opportunities to optimize its size, rotation frequency, etc. Hope this helps!
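For reference, the limit lives in the [thruput] stanza of limits.conf on the forwarder. A minimal sketch (the 1024 value is only an example, not a recommendation for this environment):

    # limits.conf on the forwarder, e.g. in an app's local directory
    [thruput]
    # default is 256 KB/s on a universal forwarder; 0 means unlimited
    maxKBps = 1024

The forwarder needs a restart for the change to take effect.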
Hi guys, I am currently encountering an error that is affecting performance, resulting in delays with file processing. Does anyone have insights or solutions to address this issue?

01-17-2025 04:33:12.580 -0600 INFO TailReader [1853894 batchreader0] - Will retry path="/apps2.log" after deferring for 10000ms, initCRC changed after being queued (before=0x47710a7c475501b6, after=0x23c7e0f63f123bf1). File growth rate must be higher than indexing or forwarding rate.
01-17-2025 04:20:24.672 -0600 WARN TailReader [1544431 tailreader0] - Enqueuing a very large file=/apps2.log in the batch reader, with bytes_to_read=292732393, reading of other large files could be delayed

I would greatly appreciate your assistance. Thank you.
I am writing a log file on my host using the script below:

    for ACCOUNT in "$TARGET_DIR"/*/; do
      if [ -d "$ACCOUNT" ]; then
        cd "$ACCOUNT"
        AccountId=$(basename "$ACCOUNT")
        AccountSize=$(du -sh . | awk '{print $1}')
        ProfilesSize=$(du -chd1 --exclude={events,segments,data_integrity,api} | tail -n1 | awk '{print $1}')
        NAT=$(curl -s ifconfig.me)
        echo "AccountId: $AccountId, TotalSize: $AccountSize, ProfilesSize: $ProfilesSize" >> "$LOG_FILE"
      fi
    done

I have forwarded this log file to Splunk using the Splunk Forwarder. The script appends new log entries to the file after successfully completing each loop iteration. However, I am not seeing the logs with the correct timestamps, as shown in the attached screenshot: the logs are timestamped as 2022, but I started sending them to Splunk on 17/01/2025. Additionally, the Splunk Forwarder is sending some logs as single-line events and others as multi-line events. Could you explain why this is happening?
Hello Splunk Community, I have a use case where we need to send metrics directly to Splunk instead of AWS CloudWatch, while still sending CPU and memory metrics to CloudWatch for auto-scaling purposes. Datadog offers solutions, such as their AgentCheck package (https://docs.datadoghq.com/developers/custom_checks/write_agent_check/), and their repository (https://github.com/DataDog/integrations-core) provides several integrations for similar use cases. Is there an equivalent solution or approach available in Splunk for achieving this functionality? Looking forward to your suggestions and guidance! Thanks!
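The closest Splunk-native equivalent is usually the HTTP Event Collector (HEC) writing to a metrics index; the hostname, token, index, and metric names below are placeholders, so treat this as a sketch rather than a drop-in answer:

    # send metric measurements to a Splunk metrics index over HEC (sketch)
    curl -k "https://splunk.example.com:8088/services/collector" \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{
            "event": "metric",
            "source": "my_app",
            "index": "app_metrics",
            "fields": {
              "metric_name:cpu.utilization": 42.7,
              "metric_name:memory.used_mb": 2048,
              "region": "us-east-1"
            }
          }'

The target index has to be created as a metrics index and allowed by the token. For agent-based collection, collectd or the OpenTelemetry Collector with the Splunk HEC exporter fills a role similar to a Datadog AgentCheck.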
Yes, I'm also facing a similar issue. Not resolved.
It's exactly that way: you need to understand your situation. Even when some technical details are mentioned, you must understand the big picture if you really want to utilize that data.
1. Understand your situation
2. Define your options
3. Select the best option
4. Implement it
In real business you cannot start directly from step 4 if you want a working solution with reasonable costs.
Hi @mohsplunking , You cannot send syslog data directly to Splunk Cloud. You must use a Splunk Universal Forwarder or Splunk Heavy Forwarder to ingest the UDP syslog and send it to Splunk Cloud. You can see the documentation about  syslog on Splunk Cloud below. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/HowSplunkEnterprisehandlessyslogdata  
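If a heavy forwarder is used, a minimal sketch of the UDP input might look like the following (the port and index are placeholders; in practice a syslog server such as rsyslog or syslog-ng writing to files, or Splunk Connect for Syslog, is generally preferred over a direct UDP input):

    # inputs.conf on the heavy forwarder (sketch)
    [udp://514]
    sourcetype = syslog
    index = main
    connection_host = ip

The forwarder then forwards to Splunk Cloud via the outputs.conf delivered in the Splunk Cloud universal forwarder credentials package.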
@mohsplunking Set up a syslog forwarder to receive data from the SaaS application and then forward it to Splunk Cloud. You can also send data via an HEC token.