All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is it possible to make a dashboard that shows only a specific error? For example, the dashboard is plain with general business transaction metrics, but when an error occurs, all of the graphs focus on that error: all other business transactions fade away and only the business transaction in error remains.
Attempting to configure modular inputs and not quite understanding step 8. We have a HEC token already established for ingestion on the instance. Do I need to create another token to configure the input in the app? I've created a local splunk_ta_setting.conf, set the parameter enable_ssl = 0 to configure the servers, and selected the templates. The documented steps are:

1. Go to Inputs. The Manage F5 Inputs page appears, listing all inputs defined in the add-on.
2. Click Create New Input to create a new input.
3. Provide an input Name. Acceptable characters are a-z, A-Z, 0-9, or "_".
4. (Optional) Enter a Description for the input.
5. Click Servers to select one or more servers from which you want to collect data.
6. Click Templates to select one or more templates that describe the data you want to collect.
7. Provide a Polling Interval (in seconds) to set the data collection frequency for the input. By default, the add-on collects data from F5 servers for each input every 300 seconds. The interval setting determines the granularity of the data returned: the more often you collect data, the more detail you see. If you specified a data collection interval when you configured your servers, that interval overrides the interval setting at the input level.
8. Enter a HEC Token name to collect the data for the configured templates. For more information, see Creating a HEC Token. Make sure the HEC token is created in the Splunk_TA_f5 context; to do that, navigate to Settings > Data Inputs from within the Splunk_TA_f5 add-on. Also, you must disable the SSL check under Global Settings for the HEC token to perform the data collection.
9. Enter the Splunk Host to collect the data for a particular Splunk instance.
10. Click Add to create the input. The Splunk Add-on for F5 BIG-IP creates the input, adds it to the list of scheduled inputs, and enables it by default. To disable the input at any time, click Disabled in the row for that input.
While editing the inputs I get an unexpected error from the Python handler: "Stanza: Standard_System, Standard_Network, Standard_LocalLB does not exist in f5_templates_ts.conf". I've created f5_templates_ts.conf and added the stanzas for the templates.
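The error message suggests that each template name referenced by an input must exist as a stanza in that file. A minimal sketch of a local f5_templates_ts.conf follows; the stanza names are taken from the error text, and any settings inside each stanza are omitted here because they depend on the add-on's own template schema:

```
# $SPLUNK_HOME/etc/apps/Splunk_TA_f5/local/f5_templates_ts.conf
[Standard_System]

[Standard_Network]

[Standard_LocalLB]
```

Comparing against the stanza names in the add-on's default f5_templates.conf is a reasonable first check, since a mismatch in spelling or case would produce exactly this "does not exist" error.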
Hi All, I need to connect a new indexer cluster, which is in GCP, to an existing Splunk SHC. I read the document below: Integrate the search head cluster with an indexer cluster - Splunk Documentation (Integrate with a single-site indexer cluster). Do I need to execute the command on all the SHC members and then do a rolling restart? Or do I execute it on one SH, perform the restart, and then repeat on the other SHs? Also, do I need to start with the captain or a non-captain? There is one more way, via the GUI: Enable the search head - Splunk Documentation. It doesn't mention whether I need to apply this on only one SH in the cluster or on all of them. Can anyone help me with this? Thanks.
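For reference, attaching a search head to an indexer cluster is per-member configuration, so each SHC member needs the same setting. A minimal sketch of the server.conf stanza involved (the URI and secret are placeholders; setting names follow recent Splunk versions, while older releases use master_uri):

```
# server.conf on each SHC member
[clustering]
mode = searchhead
manager_uri = https://<cluster-manager>:8089
pass4SymmKey = <indexer-cluster-secret>
```

Whether applied via CLI, GUI, or a deployed app, the end state is that every member carries this stanza and has been restarted.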
On occasion the panels on my Dashboard Studio dashboard are blank. When I hover over the bottom right, the message is "Search ran successfully, but no results were returned". I understand that, but to make the panels more visually appealing I want to put that message, or a different message, in the middle of the panel so the user better understands what is going on. Is there any way to edit this through the JSON source code?
Hello, thank you in advance for your time. I need to sum similar fields and present the result in a chart. My current query is the following:

...| chart count by path, auditJsonMessage.Code
| rename path as "EndPoint"
| addtotals
| rename Total as "Total Calls"

As a result it gives the following. But what I want is for all the rows with the same "EndPoint" to be added together, showing a single row with the HTTP code totals.
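One way to collapse duplicate endpoints into single rows is to normalize the path before charting. This is a sketch, assuming the duplicates are case or formatting variants of the same path, which is a guess about the data:

```
...| eval path=lower(path)
| chart count over path by auditJsonMessage.Code
| rename path as "EndPoint"
| addtotals
| rename Total as "Total Calls"
```

If the variation is instead a trailing ID segment (e.g. /api/orders/123 vs /api/orders/456), a replace() on path with a suitable regex before the chart would serve the same purpose.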
I have a table in my Dashboard Studio dashboard that shows log event count by country. I want to be able to click on a row and have a new window pop up in Splunk showing that country and all the log events for the last 24 hours for that specific country. Is this possible? I have tried adding a drilldown to the panel and adding an internal URL to a Splunk search based on a token, but I can't click specifically on a single row.
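In Dashboard Studio, row-level drilldowns are configured per visualization via eventHandlers in the JSON source, with $row.<field>.value$ tokens resolving to the clicked row. A sketch follows; the index name weblogs and the field name Country are assumptions based on the question:

```
"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "/app/search/search?q=search index=weblogs Country=\"$row.Country.value$\"&earliest=-24h&latest=now",
            "newTab": true
        }
    }
]
```

With "newTab": true, clicking any row opens the search in a new window scoped to that row's country.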
I created a new role in Splunk, let's say "RoleA". I want RoleA to be able to see a list of all users and all related information (like name, email, roles assigned, etc.). However, I don't want to allow RoleA to edit, create, or delete those users. When I remove the 'edit_user' capability from RoleA, I achieve the latter: it cannot create/edit/delete users. However, with that capability not assigned, RoleA can't see a list of all users either. Is there a way I can achieve both?
The data is in key-value format instead of field-value due to a limitation on the fields that can be used. There are 10+ key-value pairs for each host. I'm looking to convert this key-value data into fields so that I can apply a drop-down filter as an input and have the search results update according to the field selected in the drop-down. Here is an example of how my data appears in Splunk:

index=this_is_demo source="demo-kv" script=demo.ps1 hostname IN (*)

hostname - server1.com  IP - 128.xx.xx.xx  group -  key - username  value - john  script - demo.ps1
hostname - server1.com  IP - 128.xx.xx.xx  group -  key - hardware manufacturer  value - HP  script - demo.ps1
hostname - server1.com  IP - 128.xx.xx.xx  group -  key - Puppet status  value - active  script - demo.ps1
hostname - server1.com  IP - 128.xx.xx.xx  group -  key - CPU count  value - 12  script - demo.ps1

Appreciate all the knowledge and guidance here.
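A common way to pivot such key/value pairs into one field per key is a dynamic eval followed by a stats rollup per host. This is a sketch, assuming the events already carry extracted fields named key and value as shown in the sample:

```
index=this_is_demo source="demo-kv" script=demo.ps1 hostname IN (*)
| eval {key}=value
| stats values(*) as * by hostname
```

The eval {key}=value creates a field named after each event's key (e.g. a field "CPU count" with value 12), and the stats collapses them into one row per hostname, which drop-down inputs can then filter on.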
I have been monitoring the performance of a couple of Heavy Forwarders and identified something I found weird in the Monitoring Console > Indexing > Performance > Indexing Performance: Advanced > Splunkd Thread Activity panel. When I looked at metrics.log for the httpclientpollingthread_<deploymentclient-guid> thread, the ratio value was a constant "1.00". I confirmed that deploymentclient.conf > [deployment-client] > phoneHomeIntervalInSecs is set to 300 seconds (5 minutes), so why would this polling thread to the Deployment Server be constantly running instead of pausing between phone-home intervals? I can't find any information online or in the documentation about this thread, and the connection between these Heavy Forwarders and the Deployment Server is working.
Hello Team, I am building a Splunk add-on app. I want to show a few fields based on whether the installation is Cloud or on-prem. For example, if my add-on is installed on a Cloud instance, it should show username and password fields, but if I install it on-prem, it should show email and password. Can you please guide me on how to make this possible?
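One common way to branch on the deployment type is to query the instance_type attribute exposed by the server/info REST endpoint, which reports "cloud" on Splunk Cloud instances. A sketch in SPL (the add-on's setup page would typically make the equivalent REST call from its own code):

```
| rest /services/server/info splunk_server=local
| fields instance_type
```

The setup UI can then render the username/password pair when instance_type is "cloud" and the email/password pair otherwise.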
Hello, I have tabular data like below and I want to color the cells based on the following thresholds:

0%-79% ==> Red
80%-99% ==> Yellow
100% ==> Green

I am applying the below but without any success. I know that this is due to the % symbol in the values.

<format type="color" field="Passed Rules">
  <colorPalette type="list">[#FF9999,#FFE699,#C6E0B4]</colorPalette>
  <scale type="threshold">80%,100%</scale>
</format>

So how can I apply colors to the cells? Thanks in advance. @ITWhisperer 
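Since threshold scales compare numerically, one approach is to strip the % in the search itself so the column holds plain numbers. A sketch, with the field name taken from the snippet above:

```
... | eval "Passed Rules"=tonumber(replace('Passed Rules', "%", ""))
```

The scale element would then use numeric thresholds (80,100 rather than 80%,100%); if the % sign must still be visible, it can be re-added for display via a separate formatted field while coloring on the numeric one.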
We use Axonius to pull in identities. When creating the search, some of the values come in with the word "null". Is there a way to remove the word null and leave the value blank?
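One way to blank out the literal string "null" across every field is a foreach over all fields. This is a sketch, assuming the unwanted values are exactly the lowercase string "null":

```
... | foreach * [ eval <<FIELD>> = if('<<FIELD>>'=="null", "", '<<FIELD>>') ]
```

The single quotes around '<<FIELD>>' are needed so field names containing spaces are treated as field references rather than string literals.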
Hi All,

I have a blank indexer cluster and a cluster manager. I have added some config to push out an indexes.conf file with repFactor = auto. The config applied and rolled out successfully to the peer nodes. However, when I look on the Cluster Manager under Settings > Indexer Clustering, I cannot see the indexes as replicated. On a clustered indexer, the config is as follows:

/opt/splunk/etc/peer-apps/_cluster/local/indexes.conf

[hoopla]
repFactor = auto
homePath = $SPLUNK_DB/hoopla/db
coldPath = $SPLUNK_DB/hoopla/colddb
thawedPath = $SPLUNK_DB/hoopla/thaweddb

I've tried a few variations and am a bit puzzled. Am I doing anything massively wrong here? Should I see empty indexes here?
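For reference, a replicated index definition normally lives on the cluster manager and is pushed to the peers, where it lands under peer-apps as shown above. A minimal sketch of the manager-side layout (index name and paths as in the question; older versions use master-apps instead of manager-apps):

```
# On the cluster manager:
# $SPLUNK_HOME/etc/manager-apps/_cluster/local/indexes.conf
[hoopla]
repFactor = auto
homePath = $SPLUNK_DB/hoopla/db
coldPath = $SPLUNK_DB/hoopla/colddb
thawedPath = $SPLUNK_DB/hoopla/thaweddb
```

With repFactor = auto in place, replication activity only becomes visible in the clustering UI once the index actually contains data, which may explain why an empty index shows nothing there.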
I was running a search to display the last week's count for each notable, and I used a query like the one below:

index=notable search_name=*abc-xyz*
| eval date=strftime(_time, "%y-%m-%d")
| stats count by date, search_name
| chart sum(count) over date by search_name

For this I was receiving only 10 search_name values, but it should be 15 in total; the rest are grouped as "OTHER". How can I remove the value called OTHER and display the actual values?
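The grouping into "OTHER" comes from the chart command's default series limit of 10. A sketch of the same search with the limit lifted and the OTHER bucket suppressed:

```
index=notable search_name=*abc-xyz*
| eval date=strftime(_time, "%y-%m-%d")
| chart limit=0 useother=f count over date by search_name
```

limit=0 removes the cap on the number of series, and useother=f stops the remainder being folded into an OTHER column; the intermediate stats is unnecessary since chart can count directly.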
The SQL query is missing from the list in the SQL data collector configuration.
My field value got modified. Under the network config field, the value used to be "port 80 blocked"; now it has changed to "port 80 not allowed". I don't want to lose my historical data. Is there any way I can use SPL to treat "port 80 blocked" and "port 80 not allowed" as the same, so I can chart all the data together?
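One way to merge the old and new labels at search time is a normalizing eval before the chart. This is a sketch; network_config is a placeholder for the actual field name, which the question does not state:

```
... | eval network_config=case(
      lower(network_config)=="port 80 not allowed", "port 80 blocked",
      true(), network_config)
| chart count over _time by network_config
```

The lower() guards against the case variation visible in the question ("Port 80 not allowed" vs "port 80 not allowed").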
Hi, when running models in the Machine Learning Toolkit, I am currently receiving the following error:

Error in 'fit' command: (AttributeError) module 'time' has no attribute 'clock'

How can I resolve this issue? Thanks
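The likely underlying cause is a Python version mismatch: time.clock() was removed in Python 3.8 (after being deprecated since 3.3), so any bundled library still calling it fails on newer interpreters; upgrading the app to a build targeting current Python typically resolves it. As a sketch of the modern replacement that such libraries migrate to:

```python
import time

# time.clock() no longer exists on Python 3.8+;
# perf_counter() is the usual drop-in for timing code.
start = time.perf_counter()
total = sum(range(1_000))          # some work to time
elapsed = time.perf_counter() - start
print(f"sum={total}, elapsed={elapsed:.6f}s")
```

For CPU time specifically (the other historical meaning of time.clock), time.process_time() is the corresponding replacement.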
I have a multivalue field with the letters "cls" and "tenant" at the end of it. Is it possible to break the data into different rows?
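Splitting one multivalue field into one row per value is what mvexpand does. A sketch, with my_field as a placeholder for the actual field name:

```
... | mvexpand my_field
```

If the values actually arrive as a single delimited string rather than a true multivalue field, running makemv on the field first (with the appropriate delim) converts it before the mvexpand.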
Hi, I have application data on a Windows node that sends data to a log file. The data is formatted as JSON. On the Windows node I have a UF running (9.0.4). I want to send this data to a metrics index, but I do not know if this is possible or how to format the inputs.conf. So far I am unable to find any documentation about this. Does anyone know if this is possible?

Regards, Harry
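For reference, the UF side of this is an ordinary file monitor; the conversion of log events into metric data points (Splunk's "log to metrics" feature) is configured in props.conf/transforms.conf on the parsing tier (indexer or heavy forwarder), not on the UF itself. A sketch of the input, where the path, sourcetype, and index name are placeholders:

```
# inputs.conf on the UF
[monitor://C:\app\logs\metrics.json]
sourcetype = app_json_metrics
index = my_metrics_index
```

The sourcetype then needs a log-to-metrics schema on the parsing tier mapping the JSON fields to metric names, values, and dimensions; without that mapping, the events cannot land in a metrics index.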
We are trying to implement the guidelines for enabling TLS hostname verification (Configure TLS certificate host name validation - Splunk Documentation). This is working for most stanzas, but we are facing an issue enabling the 'sslVerifyServerName' setting for the KV store. We already had custom certificates in place for all inter-Splunk communications. We are starting to enable the additional verify settings, and in that process we started with the Deployment Server/SH Deployer. On that machine we created a server.conf with the following content:

[sslConfig]
serverCert = /opt/splunk/etc/auth/customcerts/azsplunkdep1d.management.dev_chained.pem
sslPassword = [REDACTED]
sslRootCAPath = /opt/splunk/etc/auth/customcerts/splunk_ca.pem
sslVerifyServerCert = true
sslVerifyServerName = true
cliVerifyServerName = true

[kvstore]
serverCert = /opt/splunk/etc/auth/customcerts/azsplunkdep1d.management.dev_chained.pem
sslPassword = [REDACTED]
sslVerifyServerCert = true
sslVerifyServerName = true

[search_state]
sslVerifyServerCert = true
sslVerifyServerName = true

[pythonSslClientConfig]
sslVerifyServerCert = true
sslVerifyServerName = true

The server certificate is valid and contains the hostname and domain name of the server (CN=<host>.<domain>). If we start Splunk, the following lines appear in splunkd.log:

ERROR KVStoreConfigurationProvider [45937 KVStoreConfigurationThread] - Failed to start mongod on first attempt reason=Failed to receive response from kvstore error=, service not ready after waiting for timeout=304029ms
ERROR KVStoreConfigurationProvider [45937 KVStoreConfigurationThread] - Could not start mongo instance. Initialization failed.
ERROR KVStoreBulletinBoardManager [45937 KVStoreConfigurationThread] - KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.
ERROR KVStoreBulletinBoardManager [45937 KVStoreConfigurationThread] - Failed to start KV Store process. See mongod.log and splunkd.log for details.
INFO MongodRunner [45938 MongodLogThread] - mongod exited normally (exit code 0, status: PID 45939 exited with code 0).

The mongod log only contains messages saying it received a signal 15 (Terminated) and that it will shut down accordingly.

If we remove (or disable) the 'sslVerifyServerName' setting under the kvstore stanza, everything works fine. Is there anything we missed in this setup? What additional steps or checks are required to enable TLS hostname verification for the KV store?

Thanks in advance, Oscar