All Posts

I am writing a log file on my host using the script below:

for ACCOUNT in "$TARGET_DIR"/*/; do
  if [ -d "$ACCOUNT" ]; then
    cd "$ACCOUNT"
    AccountId=$(basename "$ACCOUNT")
    AccountSize=$(du -sh . | awk '{print $1}')
    ProfilesSize=$(du -chd1 --exclude={events,segments,data_integrity,api} | tail -n1 | awk '{print $1}')
    NAT=$(curl -s ifconfig.me)
    echo "AccountId: $AccountId, TotalSize: $AccountSize, ProfilesSize: $ProfilesSize" >> "$LOG_FILE"
  fi
done

I have forwarded this log file to Splunk using the Splunk Forwarder. The script appends a new entry to the file after each successful loop iteration. However, the events are not getting the correct timestamps, as shown in the attached screenshot: the logs show dates from 2022, even though I only started sending them to Splunk on 17/01/2025. Additionally, the forwarder is indexing some entries as single-line events and others as multi-line events. Could you explain why this is happening?
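A likely cause, as a hedged guess: the echo line writes no timestamp at all, so Splunk has to guess one from whatever date-like strings it finds in the event, and line breaking falls back to heuristics. A minimal sketch of one possible fix, assuming a hypothetical sourcetype name account_size_log:

# in the script: write an explicit timestamp at the start of each line
echo "$(date '+%Y-%m-%d %H:%M:%S') AccountId: $AccountId, TotalSize: $AccountSize, ProfilesSize: $ProfilesSize" >> "$LOG_FILE"

# props.conf on the indexer (or heavy forwarder); stanza name is an assumption
[account_size_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

SHOULD_LINEMERGE = false with an explicit LINE_BREAKER makes every line its own event, and the explicit TIME_FORMAT stops Splunk from picking up a stray 2022-looking string as the event time.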
Hello Splunk Community, I have a use case where we need to send metrics directly to Splunk instead of AWS CloudWatch, while still sending CPU and memory metrics to CloudWatch for auto-scaling purposes. Datadog offers solutions, such as their AgentCheck package (https://docs.datadoghq.com/developers/custom_checks/write_agent_check/), and their repository (https://github.com/DataDog/integrations-core) provides several integrations for similar use cases. Is there an equivalent solution or approach available in Splunk for achieving this functionality? Looking forward to your suggestions and guidance! Thanks!
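For the "send metrics directly to Splunk" part, the usual building block is the HTTP Event Collector metrics endpoint. A minimal sketch, assuming a hypothetical host, token, and an existing metrics-type index (all placeholders here are assumptions):

curl -k https://splunk.example.com:8088/services/collector \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"time": 1737072000, "event": "metric", "host": "web-01", "source": "ec2-agent", "fields": {"metric_name:cpu.utilization": 42.5, "region": "us-east-1"}}'

The closest agent-side analogue to a Datadog AgentCheck would probably be a scripted or modular input on a Universal Forwarder, or the Splunk Distribution of the OpenTelemetry Collector, which can ship host metrics to a metrics index while CloudWatch keeps receiving the CPU and memory metrics needed for auto-scaling.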
Yes, I'm also facing a similar issue. It is not resolved.
It's exactly that way. You need to understand your situation. Even though some technical details are mentioned, you must understand the big picture if you really want to utilize that data.
1. Understand the big picture.
2. Define your options.
3. Select the best option.
4. Implement it.
In real business you cannot start directly from step 4 if you want a working solution at a reasonable cost.
Hi @mohsplunking, You cannot send syslog data directly to Splunk Cloud. You must use a Splunk Universal Forwarder or Splunk Heavy Forwarder to ingest the UDP syslog data and send it to Splunk Cloud. You can see the documentation about syslog on Splunk Cloud below. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/HowSplunkEnterprisehandlessyslogdata
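A minimal sketch of the forwarder-side input, assuming a heavy forwarder listening on the standard syslog port (the port, index, and sourcetype here are assumptions):

# inputs.conf on the heavy forwarder
[udp://514]
sourcetype = syslog
index = network
connection_host = ip

In production, a syslog server (rsyslog or syslog-ng) writing to files that a Universal Forwarder monitors is usually more robust than a direct UDP input, since the forwarder can then be restarted without dropping packets.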
@mohsplunking Set up a syslog forwarder to receive data from the SaaS application and then forward it to Splunk Cloud. You can also send data via an HEC token.
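A minimal sketch of the HEC option, assuming a Splunk Cloud stack name, token, sourcetype, and index (all placeholders here are assumptions):

curl -k https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": "sample event from the SaaS app", "sourcetype": "saas:app", "index": "main"}'

This only works if the SaaS application (or something in front of it) can POST JSON over HTTPS; for syslog-only sources, the forwarder route above is the way to go.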
Hi, you can send or pull data into SCP. It totally depends on what your SaaS system is and where you are trying to get this data from. Currently quite many SaaS products can send data to Splunk via HEC. Another option is that they have a REST API which you can query via modular inputs. From time to time there may be some other options too. You should start by asking your SaaS vendor whether they have any integration with Splunk. You could also use Google to find any other documentation for it. r. Ismo
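For the REST API route, a minimal sketch of a scripted input: Splunk indexes whatever the script writes to stdout. The API URL, token variable, and app name below are all hypothetical:

#!/bin/bash
# poll_saas.sh - fetch recent events from a hypothetical SaaS REST API
curl -s -H "Authorization: Bearer $SAAS_API_TOKEN" \
  "https://api.example-saas.com/v1/events?since=-5m"

# inputs.conf in the same app
[script://$SPLUNK_HOME/etc/apps/saas_input/bin/poll_saas.sh]
interval = 300
sourcetype = saas:events
index = main

A proper modular input (with checkpointing, so events are not fetched twice) is the more robust version of the same idea.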
Hello Team, When an organization has a hybrid deployment and is also using the Splunk Cloud service, can data be sent directly to Splunk Cloud? For example, there is a SaaS application which only has an option to send logs over syslog; how can this be achieved while using Splunk Cloud? What are the options for data input here? If someone could elaborate. Thanking you in advance, regards, Moh
I even set up the Windows box to emit metadata that matches the regex in the transform, because none of my logs seemed to have that subsecond data. Now my Windows test machine sends, among its metadata, the string time_subsecond=.123456. And interestingly enough, the subsecond transform doesn't get triggered. My latest version is:

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.[0-9]+)
FORMAT = $1$0
DEST_KEY = _raw

But nothing seems to happen.
@loknath The TailReader in Splunk is the component responsible for monitoring files and collecting data written to the end of them. It is part of the file monitor input feature, which allows Splunk to tail files and continuously read new data as it is appended.
@loknath
1. Check if the Splunk process is running on the forwarder:
ps -ef | grep splunkd
OR
cd $SPLUNK_HOME/bin
./splunk status
2. Check if the forwarder's forwarding port is open by using the command below:
netstat -an | grep 9997
If the output of the command is blank, then your port is not open. You need to open it.
3. Check on the indexer that receiving is enabled on port 9997 and that port 9997 is open. On the indexer, go to Settings >> Forwarding and receiving >> check if receiving is enabled on port 9997. If not, enable it.
4. Check if you are able to ping the indexer from the forwarder host:
ping <indexer name>
If you are not able to ping the server, then check for a network issue.
5. Confirm on the indexer whether your file has already been indexed by running the following search in the Splunk UI:
index=_internal "FileInputTracker"
As output of the search, you will get a list of indexed log files.
6. Check if the forwarder has completed processing the log file (i.e. the tailing process) by using the URL below:
https://<splunk forwarder server name>:8089/services/admin/inputstatus/TailingProcessor:FileStatus
In the tailing-processor output you can see whether the forwarder is having an issue processing the file.
7. Check the permissions of the log file you are sending to Splunk. Verify that the Splunk user has access to it.
8. Verify inputs.conf and outputs.conf for proper configuration.
inputs.conf - make sure the following configuration is present, and verify that the specified path has read access:
[monitor://<absolute path of the file to onboard>]
index = <index name>
sourcetype = <sourcetype name>
outputs.conf:
[tcpout:group1]
server = x.x.x.x:9997
9. Verify index creation. Ensure the index is created on the indexer and matches the index specified in inputs.conf.
10. Check splunkd.log on the forwarder at $SPLUNK_HOME/var/log/splunk for any errors. Messages from 'TcpOutputProc' in particular should give you an indication of what is occurring when the forwarder tries to connect to the indexer.
11. Check disk space availability on the indexer.
12. Check the ulimit if you have installed the forwarder on Linux, and set it to unlimited or to the max (65535, Splunk recommended). ulimit is a limit set by default in Linux on the number of files a process can open.
Check ulimit: ulimit -n
Set ulimit: ulimit -n <expected size>
Could you be more specific? I suggest you install it on RHEL 8, because SOAR does not officially support RHEL 9.
If this worked earlier, then it sounds like you have lost your UF's configurations. Is this issue only with one source/input, or with all of them, including internal logs? If the first, then check that your UF has the correct inputs.conf in place. If the second, then check that outputs.conf is in place. First check those from the server side, then continue on the UF side. Check that the Splunk Forwarder service is running on it. Then look in splunkd.log to see what is happening there. Depending on your environment, those conf files could be on the UF, or there could be a DS which manages all the UFs.
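One way to see which configurations the UF has actually loaded, as a sketch (paths assume a default install under /opt/splunkforwarder):

/opt/splunkforwarder/bin/splunk btool inputs list --debug
/opt/splunkforwarder/bin/splunk btool outputs list --debug
tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log

btool prints the merged configuration together with the file each setting came from, which makes a missing or overridden inputs.conf/outputs.conf easy to spot.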
You can deploy the same props.conf to all nodes if you want. Each node uses the part of it whose settings affect its own behavior. Of course, you must ensure that you don't configure something like JSON handling twice in different ways, once at index time and again at search time; that leads you to see duplicate field values in your search results.
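A minimal sketch of one props.conf serving both roles, assuming a hypothetical JSON sourcetype; the indexers read the index-time lines, the search heads the search-time line:

[my:json:sourcetype]
# index-time settings (used by indexers / heavy forwarders)
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
# search-time setting (used by search heads)
KV_MODE = json

Note the deliberate absence of INDEXED_EXTRACTIONS = json here: combining it with KV_MODE = json is exactly the kind of double JSON handling that produces duplicated fields.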
This is not an error message. It just informs you that this file has been read. Is this a totally new Splunk environment, or just a new UF which hasn't sent logs to Splunk before?
@biwanari Can you help me with the steps to install Splunk SOAR (Free Trial/Unprivileged) on RHEL version 9?
Hello everyone, this is how I am getting an error message while forwarding data from the universal forwarder to the indexer. This is what I got from the logs; I am not able to understand it. Can anyone help me with this?
> 01-17-2025 06:32:15.605 +0000 INFO TailReader [1654 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
Hi @kiran_panchavat, We already have a props.conf for the same sourcetype in an app on the DS, which we push to the manager node, and the manager distributes it to the indexers. Now my question is: can I include my KV_MODE in that same props.conf and push it to the deployer (so that it pushes it to the SHs), even though it also contains the line breaker and so on? Or should I create a new app on the deployer with a new props.conf in local and push that to the SHs? And we need all data (all sourcetypes) to follow KV_MODE=json... Is there any way I can set this as a default rather than specifying each sourcetype separately?
Are you sure that you are using the correct entitlement for SCP? If you are, then ask your Splunk account manager for help.
@splunklearner To extract key-value pairs from JSON data at search time, configure props.conf with KV_MODE=json. If you have a Splunk deployment with a Search Head Cluster (SHC), use the deployer to push this configuration to all search heads. Keep in mind that props.conf on Universal Forwarders has limited functionality. Refer to this: https://www.aplura.com/assets/pdf/where_to_put_props.pdf
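A minimal sketch of the search-head side, assuming a hypothetical sourcetype; the [default] variant answers the "every sourcetype" question, but it is risky, since it applies JSON extraction to non-JSON data too and can slow searches:

# props.conf in an app pushed by the deployer to the SHC
# per-sourcetype (the safer option):
[my:json:sourcetype]
KV_MODE = json

# global default (possible, but applies to every sourcetype):
[default]
KV_MODE = json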