Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Our team looks after 7 applications across 5 environments, and each application sits on between 2 and 4 servers, depending on the environment. Each app instance has its own dedicated server; in other words, given a hostname, you can figure out exactly which application and which environment it belongs to.

At the moment, if we want to search the logs of one of the applications (app1) in UAT, and this app has 4 servers in UAT, the only way we can do it is with search parameters like:

source=*app1.log host=host1 OR host=host2 OR host=host3 OR host=host4

Sometimes we have a few different applications talking to each other, so we end up listing a long string of host names, which gets quite tedious. A separate team manages Splunk across the organisation.

Is there something we could ask the Splunk team to do for us to make our searching easier? Something that would let us search with

application=app1 environment=uat

instead of having to specify host names for the environment we are interested in?

Our team would appreciate any suggestions that can make our work easier. Thank you
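One thing worth raising with the Splunk team, since hostnames fully determine application and environment here: a search-time automatic lookup keyed on host. This is a minimal sketch; the lookup name, CSV columns, and stanza placement are assumptions, not anything confirmed for this environment.

# host_to_app.csv, uploaded as a lookup table file
# host,application,environment
# host1,app1,uat
# host2,app1,uat

# transforms.conf
[host_to_app]
filename = host_to_app.csv

# props.conf -- apply the lookup to every host so the fields exist on all events
[host::*]
LOOKUP-app_env = host_to_app host OUTPUTNEW application environment

With something like that in place, the search collapses to: source=*app1.log application=app1 environment=uat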
Hi. We started using Splunk Observability Cloud for our Azure infrastructure. We have already set up the Azure integration with Splunk and are now in the process of creating dashboards and charts. I ran into an issue when creating any chart that contains several objects. If I want to see dtu_consumption_percent for the SQL databases on my SQL Server (tens of databases), I can easily create a time chart that contains data for all databases, but I cannot tell which line represents which database, because the name of every database looks like /SUBSCRIPTIONS/FULL_ID_OF_SUBSCRIPTION/RESOURCEGROUPS/RESOURCE_GROUP_NAME/PROVIDERS/MICROSOFT.SQL/SERVERS/THE_NAME_OF_SQL_SERVER/DATABASES/THE_NAME_OF_DATABASE. I do not see this full name even in the legend, and hovering the mouse over a line shows only a small part, like "/SUBSCRIPTIONS/FULL_ID_OF_SU". I would like to have THE_NAME_OF_DATABASE, i.e. the resource name, instead of the full azure_resource_id. Is it possible? Thank you
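For framing: in a chart's SignalFlow view, grouping by a shorter dimension changes what the legend and tooltip show. This sketch assumes the Azure integration attaches a short-name dimension; the name azure_resource_name below is a guess to verify against the metric's actual metadata.

# SignalFlow sketch -- 'azure_resource_name' is an assumed dimension name
A = data('dtu_consumption_percent').mean(by=['azure_resource_name']).publish(label='DTU %')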
In a production application, what factors should I consider when deciding between using /services/saved/searches and /servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches for a REST HTTP endpoint call to get correlation search information? Both return the same results for me.
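For reference, the namespace-scoped form pins the user and app context that splunkd uses to resolve which saved searches are visible and how knowledge-object permissions apply, which is why the two can diverge under a less-privileged account even if they match today. A sketch (host and credentials are placeholders):

curl -k -u admin:changeme "https://localhost:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches?output_mode=json"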
I want to know if there is any server-side timeout within which a response must complete for a call to the endpoint GET /services/saved/searches. Does Splunk have a default timeout for handling these API calls?
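While that question is open, a client-side bound is easy to impose; a sketch, where host, credentials, and the 60-second value are arbitrary placeholders:

curl -k -u admin:changeme --max-time 60 "https://localhost:8089/services/saved/searches?output_mode=json"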
Hello, please help me with the following request. I have a table output of several URLs to PDF files, for example this one: https://janenpiet.domain.nl/sites/conten_sites/BOT/Parts%20%20documents/ADFM/Phonesrv/Highg%20Volume%20Phiones/ABCD%20Capture/60.%20Hardware/01.%20Scanners/Amsterdam/30/2025-01-20.pdf My request is as follows: when a user clicks on a link, it should open a new window showing the PDF file. I notice that a simple drilldown does not work. Does anybody know how to do this? Regards, Harry
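A Simple XML sketch of the usual pattern, assuming the column holding the link is named url (substitute your own field): the |n filter passes the clicked value through without URL-encoding, and target="_blank" opens a new window.

<table>
  <search>
    <query>... | table url</query>
  </search>
  <drilldown>
    <link target="_blank">$row.url|n$</link>
  </drilldown>
</table>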
Looking at the Cisco ISE App/Add-on: the logging level is set to debug by default, but I cannot find a file which shows me the debug logs for this TA. /var/log/splunk does not have any ISE-specific file, and under /splunk/etc/apps/Splunk_TA there is no log file either. Thanks!
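Since anything a TA writes under $SPLUNK_HOME/var/log/splunk ends up in the _internal index, one way to check whether this TA logs at all is a broad hunt; a sketch, and the source pattern is only a guess at the filename:

index=_internal source=*cisco*ise*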
Hello everyone, I successfully installed and started the Smart Agent on the host machine where I have a Java agent and a machine agent installed. As you can see in the first image below, the process list shows that the Java agent is being monitored (same PID as the Java agent process), but the App Server Agents section shows that the same Java agent does not appear to be managed by the Smart Agent. I have restarted the Smart Agent service a couple of times, and also restarted the Java agent after any edits I made to the Smart Agent configuration. I tried to enable auto-attach and fixed the ld_preload.json file. I also tried enabling and disabling auto-discovery. Note that I am running my agent as a service on an Ubuntu Linux machine that is compatible with the agent, and I have no errors in the Smart Agent logs or the Java agent logs. However, nothing seems to be working. Can someone please assist me with this issue? Thank you.
Hi everyone. I'm sorry if this seems like a question that's already been asked, but none of the answers I could find solve my problem, and I'm very new to Splunk. I have a query that does lots of filtering and calculates multiple metrics such as averages, maxima, and conditional counts. I used to run this query twice, creating two different tables, as I need to compare two different applications on the same metrics. But now I need to do this using only one table. My query is of the form:

index=... payload.appName=app1
| bin span=1d _time
| stats ...
| eval ...
| where ...
| sort ...
| streamstats ...
| eval ...
| stats avg(...) as avg_app1 max(...) as max_app1 count(...) as count1_app1 count(...) as count2_app1 by _time
| rename avg_app1 as "Average App 1" ...
| fields "Average App 1" ...

This gives a table with all my metrics for app1, and I had, simultaneously, a similar query for app2, resulting in a separate table. I need to create a single table of the form: "Average App 1" | "Average App 2" | "Max App 1" | "Max App 2" | "Count App 1" ... It's important to note that using, for example, multisearch gives me the error "Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command)". How could I do this? Thank you in advance
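A sketch of the usual single-table pattern: bring both apps into one search and make each aggregate conditional on the app name. The field response_time is a placeholder for whatever you are averaging, and this form assumes the per-app pipelines are otherwise identical.

index=... (payload.appName="app1" OR payload.appName="app2")
| bin span=1d _time
| stats avg(eval(if('payload.appName'=="app1", response_time, null()))) as "Average App 1"
        avg(eval(if('payload.appName'=="app2", response_time, null()))) as "Average App 2"
        max(eval(if('payload.appName'=="app1", response_time, null()))) as "Max App 1"
        max(eval(if('payload.appName'=="app2", response_time, null()))) as "Max App 2"
        count(eval(if('payload.appName'=="app1", 1, null()))) as "Count App 1"
        count(eval(if('payload.appName'=="app2", 1, null()))) as "Count App 2"
  by _time

If the intermediate streamstats/where steps must stay, an alternative is to add payload.appName to every by clause and pivot the final stats output instead.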
Recently we migrated a Splunk search head from a VM to a physical machine. Splunk ES version: 9.0.3. In splunkd.log we see the errors and warnings below:

ERROR SearchScheduler [3404833 SchedulerThread] - The maximum number of concurrent real-time searches for this user based on their role quota has been reached.
ERROR script [3404833 SchedulerThread] - Error in 'sendemail' command: You do not have a role with the capability='run_custom_command' required to run this command='sendemail'. Contact your Splunk administrator to request that this capability be added to your role.
WARN DispatchManager [3404833 SchedulerThread] - Search not executed: reason="The maximum number of concurrent real-time searches for this user based on their role quota has been reached." user=****** current_concurrency=6 concurrency_limit=6, search_id="**************"
WARN DispatchManager [3404833 SchedulerThread] - Failed to start search for sid="***************". Dropping failedtostart token at path=/apps/splunk/splunk/var/run/splunk/dispatch/********************** to expedite dispatch cleanup.
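Both messages appear to point at role configuration for the scheduling user rather than at the migration itself: a missing run_custom_command capability and an exhausted real-time search quota. A sketch of the relevant authorize.conf knobs; the role name and quota value are examples only:

# authorize.conf
[role_scheduler_owner]
run_custom_command = enabled   # capability the sendemail error asks for
rtSrchJobsQuota = 12           # per-user real-time search quota; the logs show a limit of 6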
Hello everyone, I am currently working on creating a Splunk SOAR playbook that collects variables from a case and appends them to a Splunk lookup file (CSV). Unfortunately, I have not been able to find any resources on this topic. Has anyone had experience with this, or can anyone provide guidance? Thank you for your support!
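One pattern to consider, sketched here with placeholder names: have the playbook call the Splunk connector's run query action with SPL that appends a row via outputlookup append=true. The lookup file name and the values substituted from the case are assumptions.

| makeresults
| eval case_id="<case id from the playbook>", indicator="<value collected from the case>"
| fields - _time
| outputlookup append=true soar_case_indicators.csv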
Hello all, I have been using the rest command

| rest /servicesNS/-/MYAPP/saved/searches | table title

to list my saved searches. We have now created a dashboard to update a saved search manually, but the scenario is to pull in the entire query dynamically and run it, so that when the saved search is updated in the backend, the change is reflected in the dashboard as well. Thanks in advance and best regards.
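For the part about always reflecting the backend definition, note that the savedsearch command runs whatever the saved search currently contains at execution time; the name below is a placeholder:

| savedsearch "My Saved Search"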
I am using the following query to display a result on a dashboard (with sample data that resembles the data I use):

| makeresults
| eval zip="Test-10264,Production;Test-10262,Production;Test-102123,Production;MGM-1,Development;MGM-2,Development;MGM-3,Development;MGM-4,Development"
| makemv delim=";" zip
| mvexpand zip
| table zip _time
```End of sample data```
| rex field=zip "(?<ticket>.+?),(?<Status>.+$)"
| stats values(ticket) as tickets by Status
| stats count(tickets) as amount by Status

The issue I'm facing is that in the resulting column chart both columns have the same color, but I want each column to have its own unique color (this doesn't have to be predefined; it would be okay if Splunk itself chooses random colors). Thanks in advance! Edit: typo
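One common workaround, sketched below: a single series gets a single color, so transpose the result to make each Status its own series; the chart then assigns each series its own color automatically.

| ...
| stats count(tickets) as amount by Status
| transpose header_field=Status column_name=metric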
Hi Splunkers, I have some doubts about SSL use for S2S communication. First, let us restate what I believe is certain:

1. SSL provides a better compression ratio than the default one: 1:8 vs 1:2.
2. SSL compression does NOT affect the license. In general, NO compression in Splunk affects the license. This means that if data has size X + Y before compression and X after it, the consumed license will be X + Y, not X.
3. From a security perspective, with multiple Splunk components the best practice is to encrypt all flows. For example, with UF -> HF -> IDX, the best is to encrypt both the UF -> HF flow and the HF -> IDX one.

Now, for a customer we have the following data flow: Log sources -> Intermediate forwarders -> Heavy forwarders -> Indexers. I know that we should avoid HFs and IFs when possible but, for various reasons, we need them in this particular environment. Two doubts arise.

Suppose we apply SSL only between IF and HF:
1. Data arrives compressed at the HF. When it leaves and goes to the IDXs, is it still compressed? For example, suppose the original data totals 800 MB: SSL exists between IF and HF, so the HF has a tcp-ssl input on port 9997; SSL compression is applied, so the data is now 100 MB; it arrives at the HF at 100 MB. When it leaves the HF for the IDXs, is it still 100 MB?
2. Suppose we now apply SSL on the entire S2S data flow: between IF and HF and between HF and IDXs. Besides a better security posture, what other advantages would we gain?
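For reference when testing this, SSL and compression are configured per hop, so each leg is set up independently; a sketch with placeholder certificate paths:

# inputs.conf on the HF -- receiving leg from the IF
[splunktcp-ssl:9997]

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem

# outputs.conf on the HF -- sending leg toward the indexers
[tcpout:indexers]
server = idx1:9997, idx2:9997
clientCert = /opt/splunk/etc/auth/mycerts/client.pem
useClientSSLCompression = true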
OK, so we have 2 search heads and we want to migrate the Enterprise Security app from one search head to the other. How should we do that, step by step, so that we don't face any issues?
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving ale... See more...
"Hello Team, I have created a Maintenance Window in Splunk ITSI to suppress alerts from certain correlation searches. However, despite the Maintenance Window being active, we are still receiving alerts. What could be the possible reasons for this, and how can we troubleshoot and resolve the issue?"
Right now we have an online solution: in the central observability setup, an OTel Collector sends data to Splunk. Now I'm looking for an offline solution for cases where internet connectivity may be lost for 14+ hours. The on-premises cluster should send to both the FDR (factory data room) and central observability. How can something like the above be achieved offline, when there is no internet for an extended period?
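One building block often used for this, sketched here under assumptions: the OpenTelemetry Collector's file_storage extension backing a persistent sending queue, so telemetry is buffered on disk while the link is down and drained when it returns. The endpoint, paths, and queue size are placeholders; the queue must be sized for 14+ hours of backlog.

receivers:
  otlp:
    protocols:
      grpc:

extensions:
  file_storage:
    directory: /var/lib/otelcol/storage   # survives collector restarts

exporters:
  otlphttp:
    endpoint: https://ingest.<realm>.example.com   # placeholder backend
    sending_queue:
      enabled: true
      storage: file_storage   # persist the queue via the extension above
      queue_size: 100000      # size for the expected outage window

service:
  extensions: [file_storage]
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]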
Hi All, I want to download a search result as a CSV file into my local folder. Can anyone suggest some good methods to do it, and how? I saw some examples using the curl command and the REST API, but couldn't fully understand them. Can anyone help me with this?
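A sketch of the REST approach you mention, with placeholder host, credentials, and search: the export endpoint streams results as they are produced, and output_mode=csv plus a shell redirect lands them in a local file.

curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export \
     -d search="search index=_internal | head 100" \
     -d output_mode=csv > results.csv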
1. For a user to use the Splunk support portal, does the user have to be granted access to it, or do they get access inherently?
2. Our company has 2 different instances of Splunk. Will a dashboard created in one be visible in the other as well? Are the 2 instances independent of each other? Can you paint a picture for me of how they'd be related?
3. What concepts/topics should I know well in order to answer these questions myself?
My app has been archived. I uploaded a new version today and the Compatibility Report passed all checks, but it didn't help: the app is still in "archived" mode. Any idea what I need to do to solve this and to be able to activate it again? Thanks
Our Security partners at work recently determined that their analysts need the ability to run the custom command advhunt (TA_ms_advanced_hunting). The custom command's documentation indicates: "To use this app, users need the following privileges: list_storage_passwords, admin_all_objects." We do not want to give all Security users admin_all_objects. What other options do we have?
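For comparison, granting only the narrower capability to a dedicated role is a one-stanza change; a sketch, where the role name is an example and whether the command actually works without admin_all_objects would need testing:

# authorize.conf
[role_advhunt_analyst]
importRoles = user
list_storage_passwords = enabled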