All Posts


Hi @danielbb, do you want to display all the events prior to one specified event, or all the events that match the same conditions? Either way, the approach is to define the latest time with a subsearch:

<search_conditions1>
    [ search <search_conditions2>
      | head 1
      | eval earliest=_time-300, latest=_time
      | fields earliest latest ]
| ...

This way you use the _time of the <search_conditions2> event as latest and _time-300 seconds as earliest, applied to the primary search, which can be the same as the secondary search or different. Ciao. Giuseppe
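As a concrete sketch of this pattern (the index name and the "ERROR" marker are purely illustrative), this shows the five minutes of events leading up to the most recent matching event:

```spl
index=web
    [ search index=web "ERROR"
      | head 1
      | eval earliest=_time-300, latest=_time
      | fields earliest latest ]
| sort -_time
```

The subsearch runs first, finds the newest "ERROR" event, and hands its time window back to the outer search as earliest/latest.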
We have a case where we can search and find events that match the search criteria. The client would like to see the events that are prior in time to the one that we matched via the SPL. Can we do that?
@sc_admin11 To diagnose why some events are indexed as single-line and others as multi-line, please share the props.conf file.
If the logs are from 2022, they should be timestamped as 2022 unless the props for the sourcetype say otherwise. Please share the props. Splunk defaults to one line per event, but that can be changed using props. Again, please share the props for this sourcetype.
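For reference, a minimal props.conf sketch that would pin down both behaviors, assuming a hypothetical sourcetype name my_script_log and log lines like those in the question (which contain no timestamp of their own, so index time is the sensible choice):

```
[my_script_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
DATETIME_CONFIG = CURRENT
```

SHOULD_LINEMERGE = false with a newline LINE_BREAKER forces strict one-line events, and DATETIME_CONFIG = CURRENT tells Splunk to stamp events with index time instead of guessing a date from the content or file modification time.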
@sc_admin11 Could you please share the `inputs.conf` and `props.conf` files? Additionally, try using the `ignoreOlderThan` attribute in `inputs.conf`. The ignoreOlderThan setting specifies an age threshold for files: Splunk ignores files older than the specified value when indexing new data, which avoids unnecessary processing of stale data.

[monitor:///<path of the file>]
disabled = false
index = <indexname>
sourcetype = <sourcetype>
ignoreOlderThan = 7d

NOTE: With ignoreOlderThan = 7d, Splunk will ignore files older than 7 days.

ignoreOlderThan = <non-negative integer>[s|m|h|d]
* The monitor input compares the modification time on files it encounters with the current time. If the time elapsed since the modification time is greater than the value in this setting, Splunk software puts the file on the ignore list.
* Files on the ignore list are not checked again until the Splunk platform restarts, or the file monitoring subsystem is reconfigured. This is true even if the file becomes newer again at a later time.
* Reconfigurations occur when changes are made to monitor or batch inputs through Splunk Web or the command line.
* Use 'ignoreOlderThan' to increase file monitoring performance when monitoring a directory hierarchy that contains many older, unchanging files, and when removing or adding a file to the deny list from the monitoring location is not a reasonable option.
* Do NOT select a time that files you want to read could reach in age, even temporarily. Take potential downtime into consideration!
* Suggested value: 14d, which means 2 weeks.
* For example, a time window of a significant number of days or a small number of weeks is probably a reasonable choice.
* If you need a time window in small numbers of days or hours, there are other approaches to consider for performant monitoring beyond the scope of this setting.
* NOTE: Most modern Windows file access APIs do not update file modification time while the file is open and being actively written to. Windows delays updating modification time until the file is closed. Therefore you might have to choose a larger time window on Windows hosts where files may be open for long time periods.
* Value must be: <number><unit>. For example, "7d" indicates one week.
* Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
* No default, meaning there is no threshold and no files are ignored for modification time reasons.
Is there a REST API available for Notable Suppression? I'd like to get the suppression details and modify them via the REST API.
Thank you very much! You saved me a ton of time, I would never have thought to do it that way. This solution works great! 
Hi Rakzskull, what is the maxKBps set to in limits.conf on the Forwarder where you see this message? By default it is 256. If possible, you could try increasing thruput. Also, this doc may be helpful: https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdela... It might be worth investigating why the file is so large, whether the data being logged is excessive, and if there are opportunities to optimize its size, rotation frequency, etc. Hope this helps!
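For reference, a sketch of raising the forwarder throughput limit in limits.conf on the forwarder (the value 1024 here is illustrative; 0 removes the limit entirely, which can be appropriate for heavy forwarders but should be used with care on shared hosts):

```
[thruput]
maxKBps = 1024
```

The forwarder needs a restart for the change to take effect.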
Hi guys, I am currently encountering an error that is affecting performance, resulting in delays with file processing:

01-17-2025 04:33:12.580 -0600 INFO TailReader [1853894 batchreader0] - Will retry path="/apps2.log" after deferring for 10000ms, initCRC changed after being queued (before=0x47710a7c475501b6, after=0x23c7e0f63f123bf1). File growth rate must be higher than indexing or forwarding rate.

01-17-2025 04:20:24.672 -0600 WARN TailReader [1544431 tailreader0] - Enqueuing a very large file=/apps2.log in the batch reader, with bytes_to_read=292732393, reading of other large files could be delayed

If anyone has insights or solutions to address this issue, I would greatly appreciate your assistance. Thank you.
I am writing a log file on my host using the script below:

for ACCOUNT in "$TARGET_DIR"/*/; do
    if [ -d "$ACCOUNT" ]; then
        cd "$ACCOUNT"
        AccountId=$(basename "$ACCOUNT")
        AccountSize=$(du -sh . | awk '{print $1}')
        ProfilesSize=$(du -chd1 --exclude={events,segments,data_integrity,api} | tail -n1 | awk '{print $1}')
        NAT=$(curl -s ifconfig.me)
        echo "AccountId: $AccountId, TotalSize: $AccountSize, ProfilesSize: $ProfilesSize" >> "$LOG_FILE"
    fi
done

I have forwarded this log file to Splunk using the Splunk Forwarder. The script appends new log entries to the file after successfully completing each loop iteration. However, I am not seeing the logs with the correct timestamps, as shown in the attached screenshot. The logs are from 2022, but I started sending them to Splunk on 17/01/2025. Additionally, the Splunk Forwarder is sending some logs as single-line events and others as multi-line events. Could you explain why this is happening?
Hello Splunk Community, I have a use case where we need to send metrics directly to Splunk instead of AWS CloudWatch, while still sending CPU and memory metrics to CloudWatch for auto-scaling purposes. Datadog offers solutions, such as their AgentCheck package (https://docs.datadoghq.com/developers/custom_checks/write_agent_check/), and their repository (https://github.com/DataDog/integrations-core) provides several integrations for similar use cases. Is there an equivalent solution or approach available in Splunk for achieving this functionality? Looking forward to your suggestions and guidance! Thanks!
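One possible path on the Splunk side is the HTTP Event Collector metrics format: events posted to the HEC /services/collector endpoint with "event": "metric" land in a metrics index. A sketch of such a payload (the index, source, and metric names here are illustrative):

```
{
  "time": 1737100000,
  "event": "metric",
  "source": "my_app",
  "index": "app_metrics",
  "fields": {
    "metric_name:cpu.utilization": 42.5,
    "region": "us-east-1"
  }
}
```

An agent or sidecar on the instance can POST payloads like this directly to HEC while the CloudWatch agent continues to feed the auto-scaling metrics.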
Yes, I'm also facing a similar issue. It has not been resolved.
It's exactly that way. Even when some technical details are mentioned, you must understand the big picture if you really want to utilize that data:

1. Understand your situation
2. Define your options
3. Select the best option
4. Implement it

In real business you cannot start directly from step 4 if you want a working solution with reasonable costs.
Hi @mohsplunking , You cannot send syslog data directly to Splunk Cloud. You must use a Splunk Universal Forwarder or Splunk Heavy Forwarder to ingest the UDP syslog and send it to Splunk Cloud. You can see the documentation about  syslog on Splunk Cloud below. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/HowSplunkEnterprisehandlessyslogdata  
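On the heavy forwarder, a minimal inputs.conf sketch for listening for UDP syslog might look like the following (the port and sourcetype are illustrative; a dedicated syslog server such as syslog-ng writing to files monitored by a Universal Forwarder is generally the more robust pattern):

```
[udp://514]
sourcetype = syslog
connection_host = ip
index = network
```

Note that binding to port 514 typically requires elevated privileges, so a higher port (e.g. 5514) with a firewall redirect is a common workaround.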
@mohsplunking Set up a syslog forwarder to receive data from the SaaS application and then forward it to Splunk Cloud. You can also send data via a HEC token.
Hi, you can send or pull data into SCP. It totally depends on which SaaS system you are trying to get this data from. Currently quite many SaaS products can send data via HEC to Splunk. Another option is that they have a REST API from which you can query that data via modular inputs. From time to time there may be other options too. You should start by asking your SaaS vendor whether they have any integration with Splunk. You could also use Google to find other documentation for it. r. Ismo
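For the HEC option, a minimal sketch of posting one event from the command line. The URL and token below are placeholders (Splunk Cloud HEC hostnames follow the http-inputs-<stack> pattern), so substitute your own values before running:

```shell
# Placeholder endpoint and token -- replace with your Splunk Cloud values
HEC_URL="https://http-inputs-example.splunkcloud.com:443/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# Build a minimal HEC event payload
PAYLOAD=$(printf '{"event": "%s", "sourcetype": "%s", "index": "%s"}' \
  "hello from SaaS" "saas:app" "main")
echo "$PAYLOAD"

# Uncomment to actually send (requires network access to the HEC input):
# curl -k "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" -d "$PAYLOAD"
```

A successful send returns {"text":"Success","code":0} from the collector endpoint.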
Hello Team, when an organization has a hybrid deployment and is also using the Splunk Cloud service, can data be sent directly to Splunk Cloud? For example, there is a SaaS application which only has the option to send logs over syslog; how can this be achieved while using Splunk Cloud? What are the options for data input here? If someone could elaborate. Thanking you in advance, regards, Moh
I even set up the Windows box to emit metadata that matches the regex in the transform, because none of my logs seemed to have that subsecond data. Now my Windows test machine sends among its metadata the string time_subsecond=.123456, and interestingly enough the subsecond transform doesn't get triggered. My latest version is:

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.[0-9]+)
FORMAT = $1$0
DEST_KEY = _raw

But nothing seems to happen.
@loknath  The TailReader in Splunk is a component responsible for monitoring and collecting data written to the end of a file being monitored. It's part of the File Monitor Input feature, which allows Splunk to tail files and continuously read new data as it is appended to the file.  
@loknath

1. Check if the Splunk process is running on the Splunk forwarder:
ps -ef | grep splunkd
OR
cd $SPLUNK_HOME/bin
./splunk status

2. Check if the Splunk forwarder's forwarding port is open by using the below command:
netstat -an | grep 9997
If the output of the above command is blank, then your port is not open and you need to open it.

3. Check on the indexer that receiving is enabled on port 9997 and that port 9997 is open on the indexer. To check if receiving is configured: on the indexer, go to Settings >> Forwarding and receiving >> check if receiving is enabled on port 9997. If not, enable it.

4. Check if you are able to ping the indexer from the forwarder host:
ping <indexer name>
If you are not able to ping the server, then check for a network issue.

5. Confirm on the indexer whether your file is already indexed by running the following search in the Splunk UI:
index=_internal "FileInputTracker"
The output of the search query lists the log files that have been indexed.

6. Check if the forwarder has completed processing the log file (i.e. the tailing process) by using the below URL:
https://<splunk forwarder server name>:8089/services/admin/inputstatus/TailingProcessor:FileStatus
In the tailing processor output you can check whether the forwarder is having an issue processing the file.

7. Check the permissions on the log file which you are sending to Splunk. Verify that the Splunk user has access to the log file.

8. Verify inputs.conf and outputs.conf for proper configuration.
inputs.conf: make sure the following configuration is present, and verify that the specified path has read access.
[monitor://<absolute path of the file to onboard>]
index = <index name>
sourcetype = <sourcetype name>
outputs.conf:
[tcpout:group1]
server = x.x.x.x:9997

9. Verify index creation: ensure the index is created on the indexer and matches the index specified in inputs.conf.

10. Check splunkd.log on the forwarder at $SPLUNK_HOME/var/log/splunk for any errors. For example, messages from 'TcpOutputProc' should give you an indication of what is occurring when the forwarder tries to connect to the indexer.

11. Check disk space availability on the indexer.

12. Check the ulimit if you have installed the forwarder on Linux, and set it to unlimited or the maximum (65535, Splunk recommended). The ulimit is the default Linux limit on the number of files a process can open.
Check ulimit: ulimit -n
Set ulimit: ulimit -n <expected size>
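A few of the host-level checks above can be consolidated into a small script. This is a minimal sketch (a Linux host is assumed; the install path and the ulimit threshold are illustrative):

```shell
# Forwarder health-check sketch; adjust SPLUNK_HOME for your install
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}

# check LABEL COMMAND: prints PASS or FAIL based on the command's exit status
check() {
  if eval "$2" >/dev/null 2>&1; then
    echo "PASS: $1"
  else
    echo "FAIL: $1"
  fi
}

check "splunkd process running"       "pgrep -x splunkd"
check "connection to indexer on 9997" "netstat -an | grep -q ':9997.*ESTABLISHED'"
check "open-files ulimit >= 65535"    "[ \"\$(ulimit -n)\" -ge 65535 ]"
```

Any FAIL line points at the corresponding numbered step above for deeper investigation.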