All Topics



[Blog Post] Differentiate various size-related parameters from indexes.conf: maxTotalDataSizeMB, maxGlobalRawDataSizeMB, maxGlobalDataSizeMB, maxDataSize, maxVolumeDataSizeMB
Hi All, is there a way to query the API to return just the custom metrics that have been set up in an application/tier/node? Use case: we would like to know how many custom metrics have been created per application and understand whether each metric is actually useful and being used. Having hundreds of thousands of custom metrics sitting on the controller is not good practice, I would imagine. We are using a SaaS controller. Thanks if anybody has an answer or suggestion.
I am trying to enable Server Certificate Hostname Validation in the server.conf file. I literally cut and pasted the setting sslVerifyServerName = true # turns on TLS certificate host name validation from the Splunk documentation, but when I restart Splunk on this on-prem deployment server it says: WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details. Now I get that the CLI setting is cliVerifyServerName instead of sslVerifyServerName, but I even tried having both lines there and it still does not like it. I have issued an Enterprise web certificate to this server, and it is still valid for two years, so I am at a total loss here. Please help!
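For reference, a minimal server.conf sketch that sets both hostname-validation settings under [sslConfig] (a sketch based on the warning message; the CLI-side setting is separate from the Splunk-to-Splunk TLS setting):

```ini
# server.conf (sketch)
[sslConfig]
# TLS certificate hostname validation for Splunk-to-Splunk connections
sslVerifyServerName = true
# TLS certificate hostname validation for CLI connections to splunkd --
# this is the setting the restart warning points at
cliVerifyServerName = true
```

Note the stanza name: both settings must sit under [sslConfig], not at the top of the file, or they are silently ignored.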
Hello, I have several events whose _raw field contains a unique identification number. I would like to replace these with something standard to aggregate counts on.

Example data:
fi/transaction/card/purchase/tx_2994882028948/refund
fi/transaction/card/purchase/tx_3920496893002/void
fi/transaction/card/purchase/tx_2930540482198/refund

I'd like these all to read:
fi/transaction/card/purchase/trans/refund
fi/transaction/card/purchase/trans/void
fi/transaction/card/purchase/trans/refund

So: replace the unique identifier, but keep the verbiage at the end. I've tried a few of the methods noted in other threads, but to no avail; some don't work at all, and some run but don't replace the values. Thanks!!
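One way to do this at search time is a sed-style rex that rewrites the transaction segment before aggregating (a sketch; the tx_ prefix and all-digits pattern are assumed from the sample data):

```spl
<base search>
| rex mode=sed field=_raw "s/tx_[0-9]+/trans/g"
| stats count by _raw
```

If the identifier format varies, widen the pattern (e.g. tx_[^/]+) to match anything up to the next slash.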
Hello community, I am looking at TAs/apps and trying to figure out what to use, where to use it, and how to use it optimally. Are there best practices for this?

To set the stage, let's take an example. To collect only some logs from hosts using a universal forwarder, using Splunk_TA_nix and setting up a local/inputs.conf that cherry-picks a few sources/folders from default/inputs.conf seems reasonable. Still, pushing the entire app to a UF seems a bit overkill: there should be lots of things not being used in a setup like this, and not all scripts will be needed.

Then, on the search heads, field extraction should be performed for these sources. I assume you need the app installed on these as well for search-time extraction. However, things like bin/, inputs.conf, outputs.conf, etc. seem unnecessary. Generally it seems like keeping excerpts from props.conf and transforms.conf could suffice.

To formulate a question which can be answered: say I have a few logs/sources being indexed and searchable. On the search heads, assuming I am only interested in field extraction for the affected source types (no dashboards, reports, or log collection on the SH), should I just install the entire application/TA regardless of how much or little of it will actually be used? Or can I remove things which serve no purpose for the function outlined? If so, what is the most efficient way to sort out what to keep and what to discard (e.g., will it always be enough to keep props.conf/transforms.conf)?

Looking forward to your feedback. All the best.
Hello. Some users in my system do not have the Data Summary button (each one has a different role). How can I enable the Data Summary for them? Thanks
I'm looking for a way to collect all custom lists. While I can do so individually for every custom list with `phantom.get_list()`, I still need to know their names to make use of this function. So, is there a way to get all custom list names, or all custom lists' contents? As a workaround I tried making a request to "/rest/decided_list", but it doesn't return everything that is accessible through Phantom itself.
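One thing worth checking is pagination: by default the REST API returns only the first page of results, which would explain "it doesn't return everything". A sketch of querying /rest/decided_list with the page size forced to "all" (assumptions: page_size=0 returns every record, ph-auth-token carries an automation-user token, and the response wraps results in a "data" array):

```python
import json
import urllib.request


def custom_lists_endpoint(base_url: str, page_size: int = 0) -> str:
    # Build the custom-list REST URL; page_size=0 asks for all
    # records in one page (assumption, verify on your instance).
    return f"{base_url.rstrip('/')}/rest/decided_list?page_size={page_size}"


def get_all_custom_list_names(base_url: str, token: str):
    req = urllib.request.Request(
        custom_lists_endpoint(base_url),
        headers={"ph-auth-token": token},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Each entry in "data" describes one custom list, including its name
    return [item["name"] for item in body.get("data", [])]


if __name__ == "__main__":
    print(custom_lists_endpoint("https://soar.example.com"))
```

Once you have the names, each list's contents can still be fetched with `phantom.get_list()` inside a playbook.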
Hi all. I want to create an alert for hosts file modification. I found the built-in one here on the forums, but I would like to add a filter that can read inside the file, so that when it is modified by Docker, the change is ignored and the alert is not triggered. I appreciate the assistance!
I uploaded a new version of my app to Splunkbase. It passed validation, and I was able to update successfully from my Splunk Enterprise test instance. However, my Splunk **Cloud** instance is not detecting the new version and offering to update. It did pull the new version if I uninstalled and reinstalled the app; it just isn't detecting it as an update from the older version to the latest version. This is a problem since my current customers aren't receiving this bugfix release. Any help is appreciated.
I have URLs as below:
1. aa/bb/cc/dd
2. nbcn/hbd/hvhd/hbxn

I need a regular expression to get the below output:
1. aa/bb
2. nbcn/hbd
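A generic pattern for this is to anchor at the start of the string and match two runs of non-slash characters separated by a slash. A quick sketch in Python (the same regex should drop straight into Splunk's rex command):

```python
import re

# Capture the first two slash-separated segments of a path-like string.
FIRST_TWO = re.compile(r"^([^/]+/[^/]+)")


def first_two_segments(url: str) -> str:
    m = FIRST_TWO.match(url)
    # Fall back to the original string if there are fewer than two segments
    return m.group(1) if m else url


print(first_two_segments("aa/bb/cc/dd"))         # aa/bb
print(first_two_segments("nbcn/hbd/hvhd/hbxn"))  # nbcn/hbd
```

In SPL this would look like `| rex field=url "^(?<prefix>[^/]+/[^/]+)"`.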
Hi, for testing purposes, we are trying to use the Logstash client command line to send data to a Splunk server instance. The client configuration is the following (based on the steps found in https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/UsetheHTTPEventCollector) and it uses an HTTP Event Collector token:

input {
  stdin {
    codec => json
  }
}

output {
  http {
    format => "json"
    http_method => "post"
    url => "https://prd-p-kz4cj.splunkcloud.com:8088/services/collector/raw"
    headers => ["Authorization", "Splunk 26c964a2-c1e8-46e8-96ca-679d3b7542bd"]
  }
  stdout {
  }
}

However, it fails with the following error:

[ERROR] 2022-07-01 14:28:12.794 [[main]>worker5] http - Could not fetch URL {:url=>"https://prd-p-kz4cj.splunkcloud.com:8088/services/collector/raw", :method=>:post, :message=>"Connect to prd-p-kz4cj.splunkcloud.com:8088 [prd-p-kz4cj.splunkcloud.com/54.167.42.214] failed: connect timed out", :class=>Manticore::ConnectTimeout, :will_retry=>true}

And updating the URL above to https://prd-p-kz4cj.splunkcloud.com/en-US:8088/services/collector/raw produces a different error:

[ERROR] 2022-07-01 14:34:26.702 [[main]>worker4] http - Encountered non-2xx HTTP code 404 {:response_code=>404, :url=>"https://prd-p-kz4cj.splunkcloud.com/en-US:8088/services/collector/raw", :event=>#<LogStash::Event:0x199312cb>}

Also, we have noticed that a ping to the Splunk server instance does not return a reply:

ping prd-p-kz4cj.splunkcloud.com
PING prd-p-kz4cj.splunkcloud.com (54.167.42.214) 56(84) bytes of data.

Could it be that a server configuration setting is missing? Kind regards, Moacir Silva
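For what it's worth, Splunk Cloud does not expose HEC on the stack hostname itself; the documented endpoint format uses an http-inputs prefix on port 443, which would explain the connect timeout on port 8088. A sketch of the output section under that assumption (token unchanged from the original config):

```
output {
  http {
    format => "json"
    http_method => "post"
    # Splunk Cloud HEC endpoint format: https://http-inputs-<stack>.splunkcloud.com:443
    url => "https://http-inputs-prd-p-kz4cj.splunkcloud.com:443/services/collector/raw"
    headers => ["Authorization", "Splunk 26c964a2-c1e8-46e8-96ca-679d3b7542bd"]
  }
}
```

The failing ping is expected either way: the Splunk Cloud load balancers generally do not answer ICMP, so ping is not a useful reachability test here.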
Hi, I have a bar chart and I need to put multiple labels on the x-axis. This is what I currently have. I am being asked to put the label that appears in the legend under each bar on the x-axis as well. Is that possible? Many thanks as always.
Let's say we have a bunch of frozen bucket files (db_<newest_time>_<oldest_time>_<localid>) on the filesystem. How do we find out which indexes these frozen buckets belong to? I looked into the files; some are text files, but they don't seem to have strings or fields that could tell which index it is.
Hi, I want to create an alert which will trigger when any user creates a new alert or report in our environment. Could you please help me with a suitable query for this?
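One possible approach is to watch splunkd's REST access log for saved-search creation, since both alerts and reports are saved searches created via a POST to the saved/searches endpoint. A sketch (assumptions: the uri field name and the 201 status for successful creation should be verified against your own _internal data):

```spl
index=_internal sourcetype=splunkd_access method=POST status=201 uri="*/saved/searches*"
| table _time user uri
```

Saving this search as a scheduled alert over the last hour (trigger when results > 0) would give the notification you describe.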
Hello, I have an index that looks like this:

Server  Month     Number of connexions
A       January   10
B       January   12
C       January   7
A       February  5
B       February
C       February  0

Let's say I sum the number of connexions by month. Is there a way to raise an alert if a value is missing (here, Server B in February)?
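One way is to count the non-null values per server/month pair and alert when the count is zero (a sketch; field names are taken from the sample table above):

```spl
<base search>
| stats count(connexions) as reported by Server Month
| where reported = 0
```

Note the limitation: this only catches rows that exist with an empty value. If the event for Server B in February is missing entirely, the pair never appears in the results, and you would need to append a lookup of expected servers (or use `| fillnull` after a `chart ... over Server by Month`) to detect the gap.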
Hi, how can I delete the data in an index every week? From Splunk Answers and the documentation, it is mentioned that we cannot completely clear index data, as it depends on two attributes: maxTotalDataSizeMB and frozenTimePeriodInSecs. I am confused.

I am using this index data in a dashboard and basically I don't want to show the old data (P.S. I am dealing with millions of user records, so sorting/deduping is not an option). Every week new data comes into the index, and when it does, I have to clear the old data existing in my index. Irrespective of whether the index has reached its default max size, or frozenTimePeriodInSecs or maxHotSpanSecs has reached its value, how can I set the attributes on the index so that the index data is emptied every week? Thanks!
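If the goal is simply "nothing older than a week is searchable", retention settings in indexes.conf can do this without manual deletes. A sketch (assumptions: the index name is a placeholder, and maxTotalDataSizeMB stays large enough that age, not size, is what freezes buckets):

```ini
# indexes.conf (sketch)
[your_index]
# Roll buckets to frozen (i.e. delete them, since no coldToFrozenDir
# is set) once their newest event is older than 7 days
frozenTimePeriodInSecs = 604800
# Force hot buckets to roll daily, so a whole week of data
# isn't pinned inside one still-hot bucket
maxHotSpanSecs = 86400
```

Freezing operates per bucket, so data disappears in bucket-sized chunks shortly after its age limit rather than at one exact weekly moment; restricting the dashboard's time range to the last 7 days covers the remainder.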
Hello, I am getting the error below when trying to search the data on a Splunk search head:

ERROR SearchScheduler - The maximum number of concurrent historical searches for this user based on their role quota has been reached

I tried to change the parameters as below:
- scheduler max_searches_per_cpu from 25 to 50
- search max_searches_per_cpu from 6 to 10

But that too does not seem to work, and it is even generating high load. Could you please suggest how I can fix this issue? I am using Splunk Enterprise 8.0.5 (I know it's out of date, but I still need help to figure out the issue so I can plan the upgrade).
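The message is about the per-role search quota, not max_searches_per_cpu, so the relevant settings live in authorize.conf. A sketch (the role name is an example; apply it to whichever role the affected user holds, and consider reverting the max_searches_per_cpu changes, which raise the system-wide ceiling and load rather than the role quota):

```ini
# authorize.conf (sketch)
[role_power]
# Maximum concurrent historical (ad-hoc and scheduled) searches
# for a user holding this role
srchJobsQuota = 10
# Maximum concurrent real-time searches for this role
rtSrchJobsQuota = 12
```

Before raising quotas, it is worth checking the Monitoring Console for skipped or long-running scheduled searches: a quota that keeps being hit is often a symptom of searches that never finish.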
Hi all, I have a set of data and I used stats max() to get the maximum task number of every group, but the maximum number is not correct for most of the groups. I used the following query:

| stats max(task_no) as "Latest Task " by group

Can anyone tell me if there is anything I should add to get the maximum value?
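A common cause of this symptom is that task_no is being compared as a string, so "9" sorts above "10". Converting it to a number before taking the max usually fixes it (a sketch; assumes task_no is purely numeric):

```spl
<base search>
| eval task_no = tonumber(task_no)
| stats max(task_no) as "Latest Task" by group
```

If task_no has a non-numeric prefix (e.g. TASK-123), strip it first with rex or replace() so tonumber() does not return null.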
Hello all, just wanted to ask if there is any way to pull a CSV file from a blob container using a SAS URL and add it into a lookup table / KV store without it being indexed?
I want to capture the timestamp below using TIME_PREFIX's regex:

20220207T111737.014+0800

There is no guarantee that it will always be in this exact format, so can someone please help me with a generic regex?
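If the sample is representative, an explicit TIME_FORMAT is usually more robust than relying on automatic extraction. A props.conf sketch for that sample (the sourcetype name is a placeholder, and TIME_PREFIX = ^ assumes the timestamp starts the event; adjust it to whatever text precedes the timestamp in your data):

```ini
# props.conf (sketch)
[your_sourcetype]
# Regex for the text immediately before the timestamp
TIME_PREFIX = ^
# Matches 20220207T111737.014+0800
# (%3N = milliseconds, %z = +0800-style timezone offset)
TIME_FORMAT = %Y%m%dT%H%M%S.%3N%z
# The timestamp is 24 characters long; cap the scan there
MAX_TIMESTAMP_LOOKAHEAD = 24
```

If the format genuinely varies between events, a single TIME_FORMAT will not cover it; in that case it is safer to split the variants into separate sourcetypes than to loosen the regex.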