
After Splunk search head migration from VM to physical server, we are facing the issue below

Sathish28
Explorer

Recently we migrated our Splunk search head from a VM to a physical machine.
Splunk ES Version: 9.0.3

In splunkd.log we can see the below errors and warnings:


ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.
ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role.


WARN DispatchManager [3404833 SchedulerThread] - Search not executed: reason="The maximum number of concurrent real-time searches for this user based on their role quota has been reached." user=****** currenct_concurrency=6 concurrency_limit=6, search_id= "**************"
WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup.



gcusello
SplunkTrust

Hi @Sathish28 ,

there is probably a small error in your question: the latest version of ES is 8.x; there isn't any 9.x version (for now). 9.0.3 is probably the Splunk Enterprise version.

Then, did you check the resources on the physical machine?

First check whether they are sufficient; in any case, if they differ from the old VM, you may have to change some Splunk configuration, e.g. the number of concurrent searches.

Ciao.

Giuseppe


livehybrid
SplunkTrust

Hi @Sathish28 

It looks like you have a couple of issues here; however, it's unlikely that this is purely a result of moving from VM to physical infrastructure.

The first issue looks to be search concurrency: it seems that one or more users are hitting the limit on the number of concurrent searches. It's worth investigating the _audit logs to see which searches are being queued (or even skipped), as this will quickly indicate which user is impacted. You can then work out whether it is appropriate to increase the concurrency limits for that user/role/system, or whether the searches need refining to be more efficient.
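
For example, the scheduler events in _internal usually record why a scheduled search was skipped or deferred; a rough sketch (field names such as status and reason can vary slightly between Splunk versions):

index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count BY user, app, savedsearch_name, reason
| sort - count

The reason column should name the quota that was hit, and the user column shows which role(s) to review.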

The other issue relating to sendemail (https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Sendemail) is purely down to a missing capability on the user running that search. The user needs to have run_custom_command in their role so that this command will work. Again, have a look in _audit for "sendemail" to see which user(s) are calling this search if you are unsure, and then adjust their role accordingly.
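
If you are unsure which roles already include that capability, here is a quick sketch using the roles REST endpoint (capabilities can also arrive via imported roles, which is why both fields are checked; field names may differ slightly by version):

| rest /services/authorization/roles splunk_server=local
| search capabilities="run_custom_command" OR imported_capabilities="run_custom_command"
| table title capabilities imported_capabilities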

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.
Regards

Will

 


kiran_panchavat
Champion

@Sathish28 

Regarding this warning:

WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup.

Have a look for the search id (search_id="**************") in _internal for the time just before that message; the warning you're quoting here shows that something had already gone wrong before it. The issue might be related to a disk usage limit being reached for that specific user.
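
A rough sketch of how to pull everything logged for that search id (replace the placeholder with the real sid from the warning; component and log_level are populated for the splunkd sourcetype and may be empty for others):

index=_internal "<paste_the_sid_here>" earliest=-24h
| sort _time
| table _time sourcetype component log_level _raw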

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
Champion

@Sathish28 

Check whether you have this capability or not:

[Screenshot: the run_custom_command capability in the role capability list]

 

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

Sathish28
Explorer

[Screenshot: role capabilities showing run_custom_command enabled for the admin role]

For the admin role, that is enabled.
Should I check all the other roles?


kiran_panchavat
Champion

@Sathish28 Are you logged in as the admin user or a different user? Also, which role has been assigned to your account? 

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

Sathish28
Explorer

I have logged in as the admin user.


kiran_panchavat
Champion

@Sathish28 

You can send email notifications directly using the sendemail search command. Here's an example; please check whether you receive the email. Replace the values with your details:

index=_internal | head 5 | sendemail to=example@splunk.com server=mail.example.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

sendemail - Splunk Documentation: https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Sendemail

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

Sathish28
Explorer

I changed the details to my email ID, but I didn't receive any email after running that search query on the search head.


kiran_panchavat
Champion

@Sathish28 

  • Make sure you are using the correct parameters in your sendemail command.
  • Ensure that the SMTP server details are correctly configured in Splunk. Go to Settings > Server settings > Email settings and verify the SMTP server, port, and authentication details.
  • Look for any errors in the Splunk logs that might give more information about why the email wasn't sent (see the example search below).
  • Ensure that there are no firewall or network issues blocking the connection to the SMTP server.
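
For example, sendemail failures usually show up in splunkd.log or python.log on the search head; a hedged starting point (sourcetype names can differ slightly by version):

index=_internal (sourcetype=splunkd OR sourcetype=splunk_python) sendemail (ERROR OR WARN)
| sort - _time
| table _time sourcetype log_level _raw
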
Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
Champion

@Sathish28 

ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role.

This error indicates that your current role does not have the run_custom_command capability required to execute the sendemail command.

To resolve this, contact your Splunk administrator and request that they add the run_custom_command capability to your role. This will allow you to use the sendemail command without encountering this error.
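
For reference, a minimal sketch of what that grant looks like in authorize.conf (role_soc_analyst is a hypothetical role name; use the role actually assigned to the user that owns the search). The same change can be made in the UI under Settings > Roles, and a manual .conf edit typically needs a restart to take effect:

# $SPLUNK_HOME/etc/system/local/authorize.conf (or an app's local directory)
[role_soc_analyst]
run_custom_command = enabled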

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
Champion

@Sathish28 

This indicates that the user has reached the maximum number of allowed concurrent real-time searches.


ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.

The maximum number of concurrent searches that can run system-wide is determined by a setting in limits.conf:
https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Limitsconf

[search]
max_searches_per_cpu = <int>
* the maximum number of concurrent searches per CPU. The system-wide number of searches
* is computed as max_searches_per_cpu x number_of_cpus + 2
* Defaults to 2


You can increase this value in order to raise your system-wide concurrent search quota.
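
For example, if the new physical search head has more CPU cores than the old VM, a sketch of that change (the value is hypothetical; size it to the actual core count and workload, and note that limits.conf edits typically require a restart):

# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
max_searches_per_cpu = 4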

But since you are not hitting the limit as admin, you likely have to increase your regular users' concurrent search quota in authorize.conf:

https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/authorizeconf


srchJobsQuota = <number>
* Maximum number of concurrently running historical searches a member of this role can have (excludes real-time searches, see rtSrchJobsQuota)

and possibly

rtSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches a member of this role can have

for the appropriate roles.
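
A minimal sketch of what raising those quotas could look like (the role name and numbers are hypothetical; apply them to the role that owns the affected searches):

# $SPLUNK_HOME/etc/system/local/authorize.conf
[role_soc_analyst]
srchJobsQuota = 12
rtSrchJobsQuota = 8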

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!