All Posts

@Sathish28  Make sure you are using the correct parameters in your sendemail command, and that the SMTP server details are correctly configured in Splunk:
1) Go to Settings > Server settings > Email settings and verify the SMTP server, port, and authentication details.
2) Look for any errors in the Splunk logs that might give you more information about why the email wasn't sent.
3) Ensure that there are no firewall or network issues blocking the connection to the SMTP server.
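For point 2, a quick check of the internal logs could look something like this (a sketch; widen the time range to cover when your search ran):

index=_internal sourcetype=splunkd log_level=ERROR (sendemail OR SMTP)
| table _time component _raw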
Hi everyone. I'm sorry if this seems like a question that's already been asked, but none of the answers I could find solve my problem and I'm very new to Splunk. I have a query that does lots of filtering and calculates multiple metrics such as average, max, and conditional counts. I used to run this query twice, creating two different tables, as I need to compare two different applications based on the same metrics. But now I need to do this using only one table. My query is of the form:

index=... payload.appName=app1
| bin span=1d _time
| stats ... | eval ... | where ... | sort ... | streamstats ... | eval ...
| stats avg(...) as avg_app1 max(...) as max_app1 count(...) as count1_app1 count(...) as count2_app1 by _time
| rename avg_app1 as "Average App 1" ...
| fields "Average App 1" ...

This would give a table with all my metrics for app1, and I would have, simultaneously, another similar query for app2, resulting in a different table. I need to create a single table of the form: "Average App 1" | "Average App 2" | "Max App 1" | "Max App 2" | "Count App 1"... It's important to note that using, for example, multisearch gives me the error "Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command)". How could I do this? Thank you in advance.
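One possible approach (a minimal sketch, not a tested answer: metric stands in for the real field, and the intermediate eval/where/streamstats steps would each need a by payload.appName so the two apps are processed independently) is to search both apps in one query, split the aggregation by app, and let chart pivot the per-app values into columns:

index=... (payload.appName="app1" OR payload.appName="app2")
| bin span=1d _time
| streamstats ... by payload.appName
| chart avg(metric) as Average max(metric) as Max count as Count over _time by payload.appName
| rename "Average: app1" as "Average App 1", "Average: app2" as "Average App 2", "Max: app1" as "Max App 1", "Max: app2" as "Max App 2"

If the non-streaming steps really cannot be shared like this, append is another option: unlike multisearch, an append subsearch may contain non-streaming commands (subject to the usual subsearch limits).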
I changed the details to my email id, but I didn't receive any email after running the below search query on the Search Head.
@Sathish28  You can send email notifications directly using the sendemail search command. Here's an example; please check if you receive the email. Replace the values with your details.

index=_internal | head 5
| sendemail to=example@splunk.com server=mail.example.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

sendemail - Splunk Documentation
I have logged in as the admin user.
@Sathish28 Are you logged in as the admin user or a different user? Also, which role has been assigned to your account? 
@Sathish28  ERROR: WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup. Have a look for the search id <search_id="**************"> in _internal for the time just before that message; the message you're quoting here shows that something went wrong before that already. The issue might be related to a disk usage quota being reached for the specific user.
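For example, something like this (a sketch; substitute the actual search id from your log for the placeholder):

index=_internal "<your_search_id>" earliest=-24h
| sort _time
| table _time sourcetype component log_level _raw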
For the admin role that capability is enabled. Should I check all the other roles?
@Sathish28  Check whether your role has this capability or not.
@Sathish28  ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role. This error indicates that your current role does not have the run_custom_command capability required to execute the sendemail command. To resolve this, contact your Splunk administrator and request that they add the run_custom_command capability to your role. This will allow you to use the sendemail command without encountering this error.
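For the administrator, this usually means enabling the capability on the user's role, either under Settings > Roles in the UI or in authorize.conf (a sketch; the role name "your_role" is a placeholder):

# authorize.conf on the search head ("your_role" is a placeholder role name)
[role_your_role]
run_custom_command = enabled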
@Sathish28  This indicates that the user has reached the maximum number of allowed concurrent real-time searches:

ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.

The maximum number of concurrent searches that can be run system-wide is determined by a setting in https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Limitsconf:

[search]
max_searches_per_cpu = <int>
* The maximum number of concurrent searches per CPU. The system-wide number
  of searches is computed as max_searches_per_cpu x number_of_cpus + 2.
* Defaults to 2.

You can increase this value in order to raise your system-wide concurrent search quota. But since you are not hitting the limit as admin, you likely have to increase your regular user's concurrent search quota. In https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/authorizeconf:

srchJobsQuota = <number>
* Maximum number of concurrently running historical searches a member of this
  role can have (excludes real-time searches, see rtSrchJobsQuota).

and possibly

rtSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches a member of this
  role can have.

for the appropriate roles.
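As a concrete illustration, raising the quotas for one role could look like this in authorize.conf (a sketch; the role name and the numbers are placeholders, tune them to your CPU count and workload):

# authorize.conf ("your_role" and the quota values are hypothetical)
[role_your_role]
srchJobsQuota = 10
rtSrchJobsQuota = 10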
Recently we migrated the Splunk search head from a VM to a physical machine.
Splunk ES Version: 9.0.3
In splunkd.log we could see the below errors and warnings:

ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.
ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role.
WARN DispatchManager [3404833 SchedulerThread] - Search not executed: reason="The maximum number of concurrent real-time searches for this user based on their role quota has been reached." user=****** currenct_concurrency=6 concurrency_limit=6, search_id= "**************"
WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup.
@SN1  If you're migrating for the first time, I recommend testing the process in a test environment before applying it in production.
THIS is what worked for me. I did add the selective indexing stanza before, but that alone was not enough. Thanks so much, I would have never EVER guessed to add that there.
@SN1  To move the apps from one server to another, I recommend using WinSCP or SCP and following the steps I mentioned above.
So I just have to copy the Enterprise Security app folder ($SPLUNK_HOME/etc/apps) from the old to the new SH?
Hi Will, thanks for the hints. I didn't create a modular input, just a simple Data Inputs > Script in the Web UI, so when I try to run the command you suggested, Splunk says that "Scheme 'script' is not initialized" (I used 'script' as scheme and script:///opt/splunk/etc/apps/adsmart_summary/bin/getCampaignData.py as stanza name, as written in inputs.conf). I think it's the normal behaviour. In metrics.log I found that at some point Splunk got some events from my script, but nothing has been written in the index. As I wrote in the other post, my suspicions are about avg_age and max_age, which have negative values:

02-19-2025 10:49:29.584 +0100 INFO Metrics - group=per_source_thruput, series="/opt/splunk/etc/apps/adsmart_summary/bin/getcampaigndata.py", kbps=0.436, eps=0.677, kb=13.525, ev=21, avg_age=-3600.000, max_age=-3600
host = splunkidx01
source = /opt/splunk/var/log/splunk/metrics.log
sourcetype = splunkd

Maybe there's something about the timestamp of the events; I am still trying to figure it out. Thanks!
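For what it's worth, avg_age in metrics.log is roughly index time minus event time, so a constant -3600 suggests the events are timestamped exactly one hour in the future, which points at a timezone mismatch (note the +0100 offset). Also, events timestamped in the future won't show up in a default range like "Last 24 hours", so widen the window into the future when checking. A check along these lines might help (a sketch; your_index is a placeholder for the target index):

index=your_index earliest=-24h latest=+24h
| eval lag_seconds = _indextime - _time
| stats count min(lag_seconds) max(lag_seconds)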
@anooshac
1) Create the saved search.
2) Create a python script to call the saved search you created. Then, save the results as CSV in the directory you want.
3) Schedule the python script as a scripted input in Splunk.
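A minimal sketch of step 2, assuming the REST API on the search head and the requests library (the saved search name, credentials, and output path are all placeholders):

# run_saved_search.py - a hedged sketch, not a drop-in script.
# Assumptions: the Splunk REST API is reachable on port 8089, a saved
# search named "my_saved_search" exists, and the credentials and output
# path below are placeholders to replace.
import requests

SPLUNK = "https://localhost:8089"
AUTH = ("admin", "changeme")  # placeholder credentials

# The export endpoint streams the finished results, so no job polling is needed.
resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=AUTH,
    data={
        "search": '| savedsearch "my_saved_search"',
        "output_mode": "csv",
    },
    verify=False,  # common with the self-signed cert on 8089; tighten in production
    timeout=300,
)
resp.raise_for_status()

# Save the results as CSV in the directory you want (step 2 above).
with open("/opt/splunk_exports/my_saved_search.csv", "w", encoding="utf-8") as f:
    f.write(resp.text)

Step 3 is then an inputs.conf [script://...] stanza with an interval, so Splunk runs the script on a schedule.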
Hello everyone, I am currently working on creating a Splunk SOAR playbook that collects variables from a case and appends them to a Splunk lookup file (CSV). Unfortunately, I have not been able to find any resources on this topic. Has anyone had experience with this, or can anyone provide guidance? Thank you for your support!
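One common building block on the Splunk side (not SOAR-specific; the lookup name and field names here are placeholders, and the lookup is assumed to already exist) is an SPL query that appends one row and writes the lookup back, which a playbook could trigger via something like the Splunk app's run query action:

| inputlookup my_cases.csv
| append
    [| makeresults
     | eval case_id="<case_id_from_soar>", indicator="<value_from_soar>"
     | fields - _time]
| outputlookup my_cases.csv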
It is not clear what your issue is here, nor what exactly you are doing and what is perhaps not working for you. Please can you provide more details and examples of what you are doing and the results you are getting, and explain why it is not what you expected/wanted.