All Posts

Hi, you can send or pull data into SCP. It depends entirely on which SaaS system you are trying to get this data from. Many SaaS products can already send data to Splunk via HEC. Another option is that they expose a REST API which you can query using modular inputs. From time to time there may be other options as well. You should start by asking your SaaS vendor whether they have any integration with Splunk. You can also search online for other documentation about it. r. Ismo
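As a rough sketch, sending an event directly to a Splunk Cloud HEC endpoint can look like the following; the stack name, token, index, and sourcetype are placeholders you would replace with your own values:

curl -k "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from my SaaS app", "sourcetype": "saas:app", "index": "saas_logs"}'

The exact HEC hostname pattern depends on your Splunk Cloud stack and region, so verify it against your own stack before using it.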
Hello Team, when an organization has a hybrid deployment and therefore also uses the Splunk Cloud service, can data be sent directly to Splunk Cloud? For example, there is a SaaS application which only has the option to send logs over syslog; how can this be achieved while using Splunk Cloud? What are the options for data input here? If someone can elaborate. Thanking you in advance, regards, Moh
I even set up the Windows box to emit metadata that matches the regex in the transform, because none of my logs seemed to have that subsecond data. Now my Windows test machine sends, among its metadata, the string time_subsecond=.123456 and, interestingly enough, the subsecond transform doesn't get triggered. My latest version is:

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.[0-9]+)
FORMAT = $1$0
DEST_KEY = _raw

But nothing seems to happen.
@loknath  The TailReader in Splunk is a component responsible for monitoring and collecting data written to the end of a file being monitored. It's part of the File Monitor Input feature, which allows Splunk to tail files and continuously read new data as it is appended to the file.  
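For context, a minimal monitor stanza that the TailReader would service could look like this sketch in inputs.conf (the path, index, and sourcetype are placeholders):

[monitor:///var/log/myapp/app.log]
index = myapp_logs
sourcetype = myapp:log
disabled = false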
@loknath
1. Check if the Splunk process is running on the Splunk forwarder:
ps -ef | grep splunkd
OR
cd $SPLUNK_HOME/bin
./splunk status
2. Check if the Splunk forwarder's forwarding port is open by using the command below:
netstat -an | grep 9997
If the output of the above command is blank, then your port is not open and you need to open it.
3. Check on the indexer that receiving is enabled on port 9997 and that port 9997 is open on the indexer. To check if receiving is configured: on the indexer, go to Settings >> Forwarding and receiving >> check if receiving is enabled on port 9997. If not, enable it.
4. Check if you are able to ping the indexer from the forwarder host:
ping <indexer name>
If you are not able to ping the server, then check for a network issue.
5. Confirm on the indexer whether your file has already been indexed by running the following search in the Splunk UI:
index=_internal "FileInputTracker"
The output of this search will list the log files indexed.
6. Check if the forwarder has completed processing the log file (i.e. the tailing process) by using the URL below (a command-line sketch for querying this endpoint follows this list):
https://<splunk forwarder server name>:8089/services/admin/inputstatus/TailingProcessor:FileStatus
In the tailing process output you can check whether the forwarder is having an issue processing the file.
7. Check the permissions on the log file you are sending to Splunk. Verify that the Splunk user has access to the log file.
8. Verify inputs.conf and outputs.conf for proper configuration.
inputs.conf - make sure the following configuration is present, and verify that the specified path has read access:
[monitor://<absolute path of the file to onboard>]
index = <index name>
sourcetype = <sourcetype name>
outputs.conf:
[tcpout:group1]
server = x.x.x.x:9997
9. Verify index creation. Ensure the index is created on the indexer and matches the index specified in inputs.conf.
10. Check splunkd.log on the forwarder at $SPLUNK_HOME/var/log/splunk for any errors. Messages from 'TcpOutputProc' should give you an indication of what is occurring when the forwarder tries to connect to the indexer.
11. Check disk space availability on the indexer.
12. Check the ulimit if you have installed the forwarder on Linux, and set it to unlimited or the maximum (65535, the Splunk-recommended value). ulimit is the default Linux limit on the number of files a process can open.
Check ulimit: ulimit -n
Set ulimit: ulimit -n <expected size>
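As mentioned in step 6, a rough sketch of querying the tailing processor status from the command line, assuming the default management port 8089 and admin credentials for the forwarder, is:

curl -k -u admin:<password> "https://<forwarder-host>:8089/services/admin/inputstatus/TailingProcessor:FileStatus"

The host, port, and credentials here are placeholders and depend on your environment.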
Could you be more specific? I suggest you install on RHEL 8, because SOAR does not officially support RHEL 9.
If this has worked earlier, then it sounds like you have lost your UF's configuration. Is this issue only with one source/input, or with all of them, including internal logs? If the first, check that your UF has the correct inputs.conf in place. If the second, check that outputs.conf is in place. First check those from the server side, then continue on the UF side. Check that the Splunk Forwarder service is running on it, then look in splunkd.log to see what is happening there. Depending on your environment, those conf files could be on the UF, or there could be a DS which manages all UFs.
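One quick way to see which inputs.conf and outputs.conf settings the UF has actually loaded is btool; this sketch assumes a default Linux install path for the universal forwarder:

cd /opt/splunkforwarder/bin
./splunk btool inputs list --debug
./splunk btool outputs list --debug

The --debug flag also shows which file each setting comes from, which helps when a DS-managed app is overriding your local configuration.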
You can deploy the same props.conf to all nodes if you want. Each node uses only the part of it whose configuration affects its own behavior. Of course, you must ensure that you don't configure e.g. JSON handling twice in different ways, once at index time and once at search time, as that leads you to see duplicated extractions in your events.
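As an illustrative sketch (the sourcetype name and settings are placeholders), a single props.conf can carry both index-time and search-time settings; indexers act only on the index-time ones and search heads only on the search-time ones:

[my:json:sourcetype]
# index-time settings, used by indexers / heavy forwarders
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = "timestamp":\s*"
# search-time setting, used by search heads
KV_MODE = json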
This is not an error message. It just informs you that this file has been read. Is this a totally new Splunk environment, or just a new UF which hasn't sent logs to Splunk before?
@biwanari  Can you help me with the steps for installing Splunk SOAR <Free Trial/Unprivileged> on RHEL version 9?
Hello everyone, this is the error message I am getting while forwarding data from the universal forwarder to the indexer. This is what I got from the error logs; I am not able to understand it. Can anyone help me with this?
01-17-2025 06:32:15.605 +0000 INFO TailReader [1654 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
Hi @kiran_panchavat, we already have a props.conf for the same sourcetype in an app on the DS, which we push to the manager node, and the manager distributes it to the indexers. Now my question is: can I include my KV_MODE in that same props.conf and push it to the deployer (so that it pushes it to the SHs), even though it has the line breaker and so on in it? Or should I create a new app on the deployer with a new props.conf in local and push that to the SHs? And we need all data (all sourcetypes) to follow KV_MODE=json... Is there any way I can set this by default rather than specifying each sourcetype separately?
Are you sure that you are using the correct entitlement for SCP? If you are, then ask your Splunk account manager for help.
@splunklearner  To extract key-value pairs from JSON data during searches, configure props.conf with KV_MODE=json. If you have a Splunk deployment with a Search Head Cluster (SHC), use the deployer to push this configuration to all search heads. Keep in mind that props.conf on Universal Forwarders has limited functionality. Refer to this: https://www.aplura.com/assets/pdf/where_to_put_props.pdf
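A minimal sketch of what the deployer-pushed app's props.conf could contain for search-time JSON extraction (the sourcetype name is a placeholder):

[my:json:sourcetype]
KV_MODE = json

props.conf does support a [default] stanza that applies to every sourcetype, but applying KV_MODE = json globally is usually not advisable, since non-JSON data would also be run through JSON extraction.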
I am unable to select an option in the dropdown or type anything (the first part of the URL) in the "Select Cloud Stack" field while creating a support case. The dropdown for adding the Cloud Stack Name seems to be stuck; I tried other browsers too.
@loknath  To ensure proper monitoring, verify that the file you wish to track grants read access to the 'splunk' user.
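A quick way to check this from the shell (the path is a placeholder, and this assumes the forwarder runs as the 'splunk' user):

ls -l /var/log/myapp/app.log
sudo -u splunk head -n 1 /var/log/myapp/app.log

If the second command fails with a permission error, adjust the file or directory permissions, or add the 'splunk' user to the owning group.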
@paleewawa  It is better to assign the knowledge object to a user that has a role, and give that role the quota it needs.  Check this one for a workaround: https://community.splunk.com/t5/Security/ERROR-UserManagerPro-user-quot-system-quot-had-no-roles/m-p/309026 
@loknath  Verify the following details:
- Confirm whether the inputs.conf file is configured to point to the correct monitoring directory.
- Ensure that the index has been created on the indexer before sending data from the Universal Forwarder (UF).
- Check the connection between the UF and the indexer (see the sketch after this list).
- Make sure the receiving port is enabled on the indexer.
- Review the internal logs on the Splunk UF to gather insights.
- Examine the outputs.conf file for correct configurations.
Please review these details thoroughly.
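As a rough sketch for the connection and internal-log checks, assuming a default install path and that the UF's internal logs are reaching the indexer at all:

/opt/splunkforwarder/bin/splunk list forward-server
index=_internal host=<uf-hostname> source=*splunkd.log* (ERROR OR WARN)

The first command, run on the forwarder, lists configured and currently active receiving indexers; the second is a search to run on the indexer or search head, with the hostname as a placeholder.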
I am not able to see the file content on the indexer after restarting the universal forwarder. What can be the reason?
I am trying to execute the sample command in Splunk MLTK. For some reason, I am getting an error every time I run a stats command after the sample command.

index=_internal | sample partitions=3 seed=42 | stats count by action, partition_number

Search error: Error in 'sample' command: The specified field name for the partition already exists: partition_number

I tried providing a different field name and it is still the same error. If I remove the stats command and run the same search multiple times, it works without any issues. What could be the reason?