All Posts


Could you be more specific? I suggest you install on RHEL 8, because SOAR does not officially support RHEL 9.
If this has worked earlier, then it sounds like you have lost your UF's configurations. Is this issue only with one source/input, or with all of them, including internal logs? If the first, check that your UF has the correct inputs.conf in place. If the second, check that outputs.conf is in place. Check those from the server side first, then continue on the UF side. Check that the Splunk Forwarder service is running on it, then look at splunkd.log to see what is happening there. Depending on your environment, those conf files could be on the UF, or there could be a DS (deployment server) which manages all the UFs.
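As a minimal sketch of what "correct inputs.conf/outputs.conf in place" means (the monitored path, index name, and indexer address below are placeholders, not from the original post):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf (sketch; path and index are placeholders)
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log

# $SPLUNK_HOME/etc/system/local/outputs.conf (sketch; indexer address is a placeholder)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.5:9997
```

If a deployment server manages the UF, these stanzas would live in a deployed app under `etc/apps/` rather than `system/local`.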
You can deploy the same props.conf to all nodes if you want. Each node uses only the part of it whose settings affect its behavior. Of course, you must ensure that you don't configure the same thing twice in different ways, e.g. JSON handling once for indexing and again for search. That leads you to see duplicate events.
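As an illustration (the sourcetype name is a placeholder), a single props.conf can carry both kinds of settings; indexers act on the index-time keys and search heads act on the search-time keys:

```ini
# props.conf (sketch; my:json is a placeholder sourcetype)
[my:json]
# Index-time settings, used by indexers / heavy forwarders
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = "timestamp":\s*"
# Search-time setting, used by search heads
KV_MODE = json
```

The duplicate-handling pitfall above would be, for example, setting both `INDEXED_EXTRACTIONS = json` (index time) and `KV_MODE = json` (search time) for the same sourcetype.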
This is not an error message. It just informs you that this file has been read. Is this a totally new Splunk environment, or just a new UF which hasn't sent logs to Splunk before?
@biwanari  Can you help me with the steps to install Splunk SOAR (Free Trial / Unprivileged) on RHEL version 9?
Hello everyone, this is how I am getting the error message while forwarding data from the Universal Forwarder to the indexer. This is what I got from the error logs, and I am not able to understand it. Can anyone help me with this?

01-17-2025 06:32:15.605 +0000 INFO TailReader [1654 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
Hi @kiran_panchavat , We already have a props.conf for the same sourcetype in an app on the DS, which we push to the manager node, and the manager distributes it to the indexers. Now my question is: can I include my KV_MODE in that same props.conf and push it to the deployer (so that it pushes it to the SHs), even though it also has line-breaker settings and so on in it? Or should I create a new app on the deployer, with a new props.conf in local, and push that to the SHs? And we need all data (all sourcetypes) to follow KV_MODE=json... Is there any way I can set this as a default rather than specifying each sourcetype separately?
Are you sure that you are using the correct entitlement for SCP? If you are, then ask for help from your Splunk account manager.
@splunklearner  To extract key-value pairs from JSON data during searches, configure props.conf with KV_MODE=json. If you have a Splunk deployment with a Search Head Cluster (SHC), use the deployer to push this configuration to all search heads. Keep in mind that props.conf on Universal Forwarders has limited functionality. Refer to this: https://www.aplura.com/assets/pdf/where_to_put_props.pdf 
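To answer the "default for all sourcetypes" part, a minimal sketch: props.conf supports a `[default]` stanza that applies to every sourcetype not overriding it. Test this carefully, since it also touches non-JSON data:

```ini
# props.conf pushed from the deployer to the SHC (sketch)
[default]
# Search-time JSON extraction for every sourcetype that does
# not override KV_MODE. Use with care on mixed data.
KV_MODE = json

# Or scope it to a single sourcetype instead (placeholder name):
[my:json]
KV_MODE = json
```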
I am unable to select an option in the dropdown or type anything (the first part of the URL) in the "Select Cloud Stack" field while creating a support case. The dropdown for adding the Cloud Stack Name seems to be stuck; I have tried other browsers too.
@loknath  To ensure proper monitoring, verify that the file you wish to track grants read access to the 'splunk' user.
@paleewawa  It is better to assign the knowledge object to a user that has a role, and give that role the quota it needs.  Check this one for a workaround: https://community.splunk.com/t5/Security/ERROR-UserManagerPro-user-quot-system-quot-had-no-roles/m-p/309026 
@loknath   Verify the following details:
- Confirm whether the inputs.conf file is configured to point to the correct monitored directory.
- Ensure that the index has been created on the indexer before sending data from the Universal Forwarder (UF).
- Check the connection between the UF and the indexer.
- Make sure the receiving port is enabled on the indexer.
- Review the internal logs on the Splunk UF to gather insights.
- Examine the outputs.conf file for correct configurations.

Please review these details thoroughly.
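For the "review the internal logs" step, one quick approach is to grep splunkd.log for output-pipeline messages. This sketch fabricates a two-line log excerpt so it is self-contained; on a real UF you would grep /opt/splunkforwarder/var/log/splunk/splunkd.log instead (the IP and timestamps below are placeholders):

```shell
# Fabricated splunkd.log excerpt for illustration only.
cat > /tmp/splunkd_sample.log <<'EOF'
01-17-2025 06:30:01.000 +0000 WARN  TcpOutputProc - Cooked connection to ip=10.0.0.5:9997 timed out
01-17-2025 06:32:10.000 +0000 INFO  TcpOutputProc - Connected to idx=10.0.0.5:9997
EOF

# Look for output-pipeline messages; WARN/ERROR lines here usually
# point at connectivity problems between the UF and the indexer.
grep -E 'TcpOutputProc|TcpOutputFd' /tmp/splunkd_sample.log
```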
I am not able to see the file content in the indexer after restarting the Universal Forwarder. What can be the reason?
I am trying to execute the sample command in Splunk MLTK. For some reason, I am getting an error every time I run a stats command after the sample command.

index=_internal | sample partitions=3 seed=42 | stats count by action, partition_number

Search error: Error in 'sample' command: The specified field name for the partition already exists: partition_number

I tried providing a different field name and it is still the same error. If I remove the stats command and run the same search multiple times, it works without any issues. What could be the reason?
Hi @Afterimage, There is an option to force the dropdown to re-run its search every X seconds. Go into the Edit view and add this to the search:

<refresh>30</refresh>
<refreshType>delay</refreshType>

That will make the lookup search re-run every 30 seconds, picking up any new values in the process.   -Spav
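In context, a sketch of a Simple XML dropdown with those tags in place (the lookup name, field names, and token are placeholders based on this thread):

```xml
<!-- Simple XML dropdown sketch; sp6_poams, poam_id, and $poam$ are placeholders -->
<input type="dropdown" token="poam">
  <label>POA&amp;M</label>
  <search>
    <query>| inputlookup sp6_poams | fields poam_id</query>
    <refresh>30</refresh>
    <refreshType>delay</refreshType>
  </search>
  <fieldForLabel>poam_id</fieldForLabel>
  <fieldForValue>poam_id</fieldForValue>
</input>
```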
If you have JS that is creating the new POA&M, then you could set a token in the JS that the multiselect search uses. When the token changes value, the multiselect search will re-run. The MS would have something like

| inputlookup sp6_poams ``` $ms_trigger_token$ ```

i.e. the token just needs to be in comments in the SPL, and your JS would do something like

var defTokens = mvc.Components.get('default');
var subTokens = mvc.Components.get('submitted');
var value = defTokens.get('ms_trigger_token') + 1;
defTokens.set('ms_trigger_token', value);
subTokens.set('ms_trigger_token', value);

Untested, but it's easy enough to do directly in XML. With JS you just need to get the current token value, increment it, and set it back. I'm not sure exactly which of the default and/or submitted token models needs updating, but it doesn't hurt to do both.
FWIW, it's usually better to ask a new question than to pile on to a 4-year-old thread. To keep only the BAD events, try one of these:

index=my_index
| eval my_check=if(my_field>100,"NOK","OK")
| where my_check="NOK"
| table _time my_check

or

index=my_index
| where my_field>100
| table _time my_field
I have a question. I did this on an event and basically used the if command: if above 15 mins then the output is BAD, and if under 15 then the output is GOOD. This works. My question is, I now want to search only the BAD results and alert on them, so how do I start another search after I have run eval and got my BAD output?
Hi there, make sure the user role has the required capabilities:
- edit_tokens_settings, which turns token authentication on or off
- edit_tokens_all, which lets you create, view, and manage tokens for any user on the instance
- edit_tokens_own, which lets you create, view, and manage tokens for yourself

https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Security/Setupauthenticationwithtokens#Prerequisites_for_creating_and_configuring_tokens

Hope this helps ... cheers, MuS
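As a sketch (the role name is a placeholder), granting one of those capabilities to a custom role in authorize.conf looks like this; the same can be done in the UI under Settings > Roles:

```ini
# authorize.conf (sketch; role_token_user is a placeholder role name)
[role_token_user]
importRoles = user
# Allow members of this role to create and manage their own tokens
edit_tokens_own = enabled
```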