All Posts


Thank you for your response! It definitely works, but it has two related issues: 1) It gives two rows instead of adding columns. Running the query for only one app gives me one row per date, with the metrics as columns. Using your strategy gives me two rows per date, one for each app. Is it possible to set the metrics side by side as different columns? Having two rows is a problem because, in theory, the user should be able to compare both applications over a range of, say, one month. Having two rows per day and making the user compare pairs of rows in the middle of 58 others gets confusing. 2) Related to the above: how would I rename the metrics? Since this strategy gives me two rows, the distinction between the apps relies on identifying which row corresponds to which app. Would it be possible to rename the metrics in such a way that I have "Average App 1" and "Average App 2"?
Search for both apps at the same time and let the stats commands sort them out.

index=... (payload.appName=app1 OR payload.appName=app2)
| bin span=1d _time
| rename payload.appName as appName
| stats ... by appName
| eval ...
| where ...
| sort ...
| streamstats ... by appName
| eval ...
| stats avg(...) as avg_app1 max(...) as max_app1 count(...) as count1_app1 count(...) as count2_app1 by _time appName
| rename avg_app1 as "Average App 1" ...
| fields "Average App 1" ...
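To address the follow-up (one row per day with per-app columns), one option is to pivot the per-app rows into per-app fields after the stats. Here is a minimal sketch: the numeric field duration is a hypothetical stand-in for whatever your real aggregations use, the eval {appName} trick names the output fields after the app, and the final stats collapses the pair of rows into one row per day:

index=... (payload.appName=app1 OR payload.appName=app2)
| rename payload.appName as appName
| bin span=1d _time
``` per-app metrics; at this point there are two rows per day ```
| stats avg(duration) as Average max(duration) as Max count as Count by _time appName
``` copy each metric into a field named after the app, e.g. app1_Average ```
| eval {appName}_Average=Average, {appName}_Max=Max, {appName}_Count=Count
| fields - appName Average Max Count
``` collapse the two rows per day into one ```
| stats values(*) as * by _time
| rename app1_Average as "Average App 1", app2_Average as "Average App 2", app1_Max as "Max App 1", app2_Max as "Max App 2", app1_Count as "Count App 1", app2_Count as "Count App 2"

The rename at the end also answers the labelling question: once each metric carries the app name in its field name, you can rename it to whatever display label you like.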
Hello @woodcock  A question: in your post you talk about some scripts, and I wanted to know if they allow you to stop the error=3 that occurs when you run a search and it doesn't return any results. Currently I need to run correlation searches with the command "... | sendalert risk ..." at the end, but when there are no results it throws that error and the whole correlation search is truncated. For that reason I wanted to know if there is a way to abort sendalert risk when there are no results.
Hi @SN1  Check out https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-move-Enterprise-Security-to-new-search-head/m-p/460898 (which I believe @kiran_panchavat has posted some snippets from below) as it has more info. To be clear though - it is not as simple as just moving the "SplunkEnterpriseSecuritySuite" app. Depending on your setup there will be multiple apps (such as SA-* and TA-* apps) which support the ES app. Aside from the apps, there are also KV Stores which you will need to back up and restore/migrate to the new SH. Question - is the new SH going to replace the old SH? Are there any users/configuration on the new SH already? If the new SH is a blank replacement then you might be okay to copy all the $SPLUNK_HOME/etc/apps content over, along with a KV Store backup and restore from the old to the new SH. As mentioned previously, it would be worth testing this in a development environment - if you have one! I know that not everyone has that luxury! Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
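If it helps, a rough sketch of the KV Store backup/restore CLI (the archive name is arbitrary and exact behaviour can vary by Splunk version, so treat this as illustrative rather than a definitive runbook):

# On the old search head
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName es_migration
# The archive is written under $SPLUNK_HOME/var/lib/splunk/kvstorebackup by default

# On the new search head, once the supporting apps are in place
$SPLUNK_HOME/bin/splunk restore kvstore -archiveName es_migration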
Hi @SplunkExplorer  There is a really good post (https://community.splunk.com/t5/Getting-Data-In/Forwarder-Output-Compression-Ratio-what-is-the-expected/td-p/69899) which has some stats on various compression rates which might help, but I think to answer your question - it's good practice to have end-to-end SSL encryption, and SSL compression can also reduce your networking costs. In terms of "if before compression data has dimension X + Y and, after it, X, consumed license will be X + Y, not X." - I'm not really sure I understood this, but as you said, compression does not impact license. Therefore if you have 800MB of data at source, which is compressed to 100MB and sent to the destination indexers, then the amount of license you use is 800MB, regardless of whether it arrives compressed. Licensing is based on raw ingestion size (unless you have workload-based licensing!) Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
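For reference, a minimal outputs.conf sketch for enabling compression on a forwarder (group name and server addresses are placeholders; note that with plain compressed=true the receiving indexers must set the matching flag on their splunktcp input):

# outputs.conf on the forwarder - names are placeholders
[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
compressed = true

# inputs.conf on the indexers must match
[splunktcp://9997]
compressed = true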
Hi @TallBear  The easiest way to achieve this is to create multiple series like this, and then change to a stacked bar chart:

| makeresults
| eval zip="Test-10264,Production;Test-10262,Production;Test-102123,Production;MGM-1,Development;MGM-2,Development;MGM-3,Development;MGM-4,Development"
| makemv delim=";" zip
| mvexpand zip
| table zip _time
```End of sample data```
| rex field=zip "(?<ticket>.+?),(?<Status>.+$)"
| stats values(ticket) as tickets by Status
| stats count(tickets) as amount by Status
``` Add the SPL below ```
| eval {Status}=amount
| fields - Status amount

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @siva_kumar0147  If you have created a dashboard based on the REST call then whenever you load the dashboard it will show the current details of your saved searches; you shouldn't need to make any updates to the dashboard when your saved searches are updated elsewhere. I hope I have understood your question, but please let me know if I can help further. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
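For illustration, a sketch of the kind of REST-based search such a dashboard might be built on (the listed fields are just examples of what the endpoint returns). Because | rest runs each time the dashboard loads, the panel always reflects the saved searches as they currently exist:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| table title eai:acl.app cron_schedule disabled next_scheduled_time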
Hi @Sathish28  It looks like you have a couple of issues here, however it's unlikely that this is purely a result of moving from VM to physical infrastructure. The first issue looks to be search concurrency - it seems that one or more users are hitting the limits on the number of searches being run. It's worth investigating the _audit logs to see which searches are being queued (or even skipped) as this will easily indicate which user is impacted. You can then work out whether it is appropriate to increase the concurrency limits for that user/role/system or whether the searches need refining to be more efficient. The other issue, relating to sendemail (https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Sendemail), is purely down to a missing capability on the user running that search. The user needs to have run_custom_command in their role for this command to work. Again, have a look in _audit for "sendemail" to see which user(s) are calling this search if you are unsure, and then adjust their role accordingly. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
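As a starting point on the concurrency side, a sketch that tallies skipped and deferred scheduled searches from the scheduler logs (a complement to the _audit approach above; the field names are the standard scheduler.log ones):

index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count by user app savedsearch_name status
| sort - count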
@Sathish28  Make sure you are using the correct parameters in your sendemail command. Ensure that the SMTP server details are correctly configured in Splunk: go to Settings > Server settings > Email settings and verify the SMTP server, port, and authentication details. Look for any errors in the Splunk logs that might give you more information about why the email wasn't sent. Also ensure that there are no firewall or network issues blocking the connection to the SMTP server.
Hi everyone. I'm sorry if this seems like a question that's already been asked, but none of the answers I could find solve my problem and I'm very new to Splunk. I have a query that does lots of filtering and calculates multiple metrics such as average, max, and conditional counts. I used to run this query twice, creating two different tables, as I need to compare two different applications based on the same metrics. But now I need to do this using only one table. My query is of the form:

index=... payload.appName=app1
| bin span=1d _time
| stats ...
| eval ...
| where ...
| sort ...
| streamstats ...
| eval ...
| stats avg(...) as avg_app1 max(...) as max_app1 count(...) as count1_app1 count(...) as count2_app1 by _time
| rename avg_app1 as "Average App 1" ...
| fields "Average App 1" ...

This would give a table with all my metrics for app1, and I would simultaneously have another similar query for app2, resulting in a different table. I need to create a single table of the form: "Average App 1" | "Average App 2" | "Max App 1" | "Max App 2" | "Count App 1"... It's important to note that using, for example, multisearch gives me the error "Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command)". How could I do this? Thank you in advance.
I changed the details to my email ID, but I didn't receive any email after running the below search query on the Search Head.
@Sathish28  You can send email notifications directly using the sendemail search command. Here's an example; please check if you receive the email. Replace the values with your details.

index=_internal
| head 5
| sendemail to=example@splunk.com server=mail.example.com subject="Here is an email from Splunk" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

See: sendemail - Splunk Documentation
I have logged in as the admin user.
@Sathish28 Are you logged in as the admin user or a different user? Also, which role has been assigned to your account? 
@Sathish28

WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup.

Have a look for the search id <search_id="**************"> in _internal for the time just before that message; the message you're quoting here shows that something had already gone wrong before it. The issue might be related to a disk usage limit being reached for that specific user.
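A quick way to pull everything splunkd logged for that sid around that time (substitute the real search id for the placeholder):

index=_internal source=*splunkd.log* "<your_search_id>" (log_level=ERROR OR log_level=WARN)
| table _time log_level component _raw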
For the admin role that capability is enabled. Should I check all the other roles?
@Sathish28  Check whether your role has this capability or not (Settings > Roles, select the role, then look for run_custom_command under Capabilities).
@Sathish28

ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role.

This error indicates that your current role does not have the run_custom_command capability required to execute the sendemail command. To resolve this, contact your Splunk administrator and request that they add the run_custom_command capability to your role. This will allow you to use the sendemail command without encountering this error.
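For the administrator, the capability can be granted in the UI or via authorize.conf; a minimal sketch (the role name is a placeholder for whichever role runs the search):

# authorize.conf, e.g. $SPLUNK_HOME/etc/system/local/authorize.conf
[role_your_role]
run_custom_command = enabled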
@Sathish28  This indicates that the user has reached the maximum number of allowed concurrent real-time searches.

ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.

The maximum number of concurrent searches that can be run system-wide is determined by a setting in https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Limitsconf

[search]
max_searches_per_cpu = <int>
* The maximum number of concurrent searches per CPU. The system-wide number of searches is computed as max_searches_per_cpu x number_of_cpus + 2.
* Defaults to 2.

You can increase this value in order to raise your system-wide concurrent search quota. But since you are not hitting the limit as admin, you likely have to increase your regular user's concurrent search quota. In https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/authorizeconf

srchJobsQuota = <number>
* Maximum number of concurrently running historical searches a member of this role can have (excludes real-time searches, see rtSrchJobsQuota).

and possibly

rtSrchJobsQuota = <number>
* Maximum number of concurrently running real-time searches a member of this role can have.

for the appropriate roles.
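A sketch of what raising the per-role quotas could look like (the role name and numbers are placeholders; size them to your hardware and workload rather than copying them verbatim):

# authorize.conf on the search head
[role_user]
srchJobsQuota = 10
rtSrchJobsQuota = 10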
Recently we migrated our Splunk search head from a VM to a physical machine.

Splunk ES Version: 9.0.3

In splunkd.log we could see the below errors and warnings:

ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.

ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role.

WARN DispatchManager [3404833 SchedulerThread] - Search not executed: reason="The maximum number of concurrent real-time searches for this user based on their role quota has been reached." user=****** currenct_concurrency=6 concurrency_limit=6, search_id= "**************"

WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup.