All Topics

My question is about license usage: the license page shows "Volume used today = 0 MB (0% of quota)". Why is it showing 0 MB? I have run many queries from the search head and the daily volume count still does not increase.

Indexer name: splunk
License expiration: 24 Dec 2022, 19:55:17
Licensed daily volume: 500 MB
Volume used today: 0 MB (0% of quota)

My understanding is that when I run a search query it fetches data from the indexer, and that this should be added to the "Volume used" count. Could you please help me, as it seems I am missing something here? Sorry for the basic question, but I have not been able to find anything on this in the Splunk documentation or elsewhere.
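For context on where that number comes from: license usage is measured by how much data is indexed per day, not by how much data searches read back, so running searches will not move the "Volume used today" figure. A minimal sketch of how daily indexed volume can be checked, assuming the standard license usage log in the _internal index is available:

index=_internal source=*license_usage.log* type=Usage
| eval mb = b / 1024 / 1024
| timechart span=1d sum(mb) AS indexed_mb

If this also reports 0, it usually means no data is being indexed beyond the internal logs (which do not count against the quota).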
I have scheduled a dashboard via the "Schedule PDF" option, and I used to get the email every day, but I have suddenly stopped receiving the dashboard PDF report in my mail. How do I troubleshoot the issue?
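One common starting point is to check the internal logs for PDF generation and email delivery errors around the scheduled time. A rough sketch, assuming access to the _internal index:

index=_internal (source=*splunkd.log* OR source=*python.log*) ERROR (sendemail OR pdfgen OR PDF)
| table _time source _raw

It is also worth confirming that the owner of the schedule still exists and can access the dashboard, and that the mail server settings under Settings > Server settings > Email settings have not changed.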
Hello Splunkers, I am trying to compare two multivalue ID columns and return TRUE when at least one of the values matches between the two ID columns. For example:

ID1                    ID2              Match
402830 602369          602369 244633    TRUE
402830 840317 602369   602369 244633    TRUE
152893 443482          602369 244633    FALSE
227213 244633          602369 244633    TRUE
422210 442824          602369 244633    FALSE

The question is how to create the Match column by comparing ID1 to ID2. They are both multivalue fields, and one field could contain up to 25 values. As long as there is at least one match between ID1 and ID2, Match should return TRUE. I have tried match() and mvfind(), but haven't had any luck. Thanks all!
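A minimal sketch of one way to test for an overlap, assuming neither field contains duplicate values within itself (if a value appears in both fields, deduplicating the combined field shrinks it):

| eval combined=mvappend(ID1, ID2)
| eval Match=if(mvcount(combined) != mvcount(mvdedup(combined)), "TRUE", "FALSE")
| fields - combined

If the individual fields can contain internal duplicates, running mvdedup on ID1 and ID2 first keeps the comparison honest.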
I know this seems obvious: I'm searching 5 minutes back and alerting on the results every 1 minute, so there is 4 minutes of overlap on each search. But due to some internal issues the logs are not always indexed right on time, so I can't do a 1 minute search for a 1 minute alert or I would certainly miss things. The alert is throttled to suppress triggering for 5 minutes, but that is missing alerts too. Is there any way for the alert to be aware of a previous alert's results and build a dynamic allow list?
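One common pattern for this is to keep a small lookup of items that have already alerted and filter against it inside the alert search, so the overlap no longer re-triggers. A rough sketch, assuming a unique identifier field called event_id and a lookup file named already_alerted.csv (both hypothetical names):

index=my_index sourcetype=my_sourcetype earliest=-5m
| search NOT [ | inputlookup already_alerted.csv | fields event_id ]
| eval alerted_at=now()
| fields event_id alerted_at
| outputlookup append=true already_alerted.csv

The lookup would also need periodic cleanup (for example, a scheduled search that drops rows older than an hour) so it doesn't grow without bound.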
I'm trying to get an accurate percentile representation from a dataset of hourly metrics, excluding outliers. The dataset consists of user sessions by group of machines for each hour, where there is a production and a DR set of machines. On occasion, to validate DR, those machines are used as production; when that happens, they drastically skew the percentiles of an otherwise low number of DR sessions in use. The data looks like this:

Environment-Group  Day        Hour   Session Count  Note
Prod-A             Monday     8:00   1000
Prod-A             Monday     12:00  1500
Prod-A             Monday     16:00  1300
Prod-A             Tuesday    8:00   1050
Prod-A             Tuesday    12:00  1600
Prod-A             Tuesday    16:00  1400
Prod-A             Wednesday  8:00   500   Outlier (low)
Prod-A             Wednesday  12:00  800   Outlier (low)
Prod-A             Wednesday  16:00  600   Outlier (low)
Prod-A             Thursday   8:00   1000
Prod-A             Thursday   12:00  1500
Prod-A             Thursday   16:00  1300
DR-A               Monday     8:00   10
DR-A               Monday     12:00  25
DR-A               Monday     16:00  15
DR-A               Tuesday    8:00   20
DR-A               Tuesday    12:00  30
DR-A               Tuesday    16:00  25
DR-A               Wednesday  8:00   500   Outlier (high)
DR-A               Wednesday  12:00  800   Outlier (high)
DR-A               Wednesday  16:00  600   Outlier (high)
DR-A               Thursday   8:00   15
DR-A               Thursday   12:00  50
DR-A               Thursday   16:00  30

For this data, I might have 30 days where each hourly metric is below 50 for a DR group, but for one or two days in the month it might be in the hundreds or thousands, and I'm trying to represent what the consumption looks like for the month without skewing the numbers with a DR test event. Ideally I'd like to omit the top and bottom 1, 2 or 3 percent, then get percentiles from the remaining values. The link below shows an Excel example of this type of calculation, excluding top and bottom values from percentiles:

Using the Percentile function while excluding outliers : excel (reddit.com)
=PERCENTILE.INC(IF((Values>Min)*(Values<Max),Values),Percentile)

Thanks, Jim
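A rough sketch of the same idea in SPL, assuming the hourly values arrive as events with fields environment_group and session_count (hypothetical names): compute percentile bounds per group with eventstats, drop the rows outside them, then take percentiles of what remains.

index=my_metrics sourcetype=hourly_sessions
| eventstats perc2(session_count) AS low perc98(session_count) AS high BY environment_group
| where session_count > low AND session_count < high
| stats perc50(session_count) AS p50 perc90(session_count) AS p90 perc95(session_count) AS p95 BY environment_group

Adjusting perc2/perc98 to perc1/perc99 or perc3/perc97 changes how aggressively the tails are trimmed.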
I have a list of software installed in our environment, but some of the software has several duplicated entries with different versions. How do I clean up the list by removing the other versions and keeping only the latest version of each software package? I need help with a query for this. The query must produce a corrected list of the software.
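A minimal sketch of one approach, assuming fields named software_name and version (hypothetical names): sort so the highest version comes first within each package, then keep only the first row per package.

index=software_inventory
| sort 0 software_name, -version
| dedup software_name
| table software_name version

One caveat: sorting version strings is lexicographic, so a version like 9.x can sort above 10.x; if that matters, the version field needs to be normalized (for example, split into numeric parts) before sorting.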
I have a process that can generate one of two events:

A = the process could not be completed, will try again later
B = the process was completed

There can be some instability, so it is to be expected that the process can't be completed for a brief period but is then able to complete. I want to send an alert only when there are only incomplete processes for the period.

query results   Alert?
A               yes
A B             no
B               no
(none)          no

The question https://community.splunk.com/t5/Alerting/Alert-if-event-B-occurs-without-event-A/m-p/461075 seems to ask the same thing, but I am not sure it was answered.
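A rough sketch of one way to express that condition, assuming a field event_type holding A or B (hypothetical name): count both outcomes over the alert's time window and only keep a result when there are A events and no B events.

index=my_index (event_type="A" OR event_type="B")
| stats count(eval(event_type="A")) AS incomplete count(eval(event_type="B")) AS completed
| where incomplete > 0 AND completed = 0

With the alert set to trigger when the number of results is greater than zero, the "A only" row in the table above is the only case that fires.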
Could someone tell me the key difference between Dashboard Studio and Glass Tables? Where is Splunk headed with these two products, or are they two different products? An additional question: could you explain which visualization would act as tabs in Dashboard Studio or a Glass Table? A reference to categorizing/segmenting the visualizations in these types of dashboards would be appreciated.
My dashboard uses custom variables to fill in dates in the section headers. When I export the dashboard as a PDF from the UI, the headers are populated correctly. When I schedule the PDF for e-mail delivery, however, the custom variable isn't populated in the headers. How can I get the label to populate correctly on the e-mailed version of the dashboard?
I have a playbook that adds a row to a custom list for each task that can't be processed at runtime, and I'm building a second, timer-driven playbook that should retry each of those actions. Each row has five columns: four for the values needed to attempt the action, and a counter that should be incremented on each retry (after five tries, it should remove the row and alert that the task can't be performed automatically). I can use phantom.get_list() (capturing only the third element, which is the list contents) to get the contents of the custom list into the retry playbook as a Python list, but I'm having trouble coming up with a way to iterate through it. I've tried the recommendation in another question/answer (https://community.splunk.com/t5/Splunk-SOAR-f-k-a-Phantom/How-do-you-achieve-quot-for-quot-loops/m-p/615841), but passing the retrieved list from a code block into a format block with %% {0} %% as the format and then doing a phantom.debug on format_1:formatted_data.* just returns the monolithic list once. The behavior I need is for the code block to run once for each row of the incoming list. Is this possible with Phantom? If so, is this approach correct, and what might I be doing wrong here?
Hi Everyone, Has anyone ever tried to migrate a single index in an existing SmartStore clustered indexer environment to a new S3 bucket? For compliance purposes, I need to use an internal S3-compatible environment, and I now need to divide my single-bucket environment into multiple S3 buckets.

For example, going from:

[volume:s3]
path = s3://bucket4all

to:

[volume:s3]
path = s3://bucket4all

[volume:s3index1]
path = s3://bucket4index1

[volume:s3index2]
path = s3://bucket4index2

(etc...)

Not even sure this is possible... Thanks
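For reference, pointing an individual index at its own volume is an indexes.conf change on that index; a minimal sketch, assuming an index named index1 (hypothetical name). What happens to buckets already uploaded under the old volume is a separate question and would need to be planned for (migration or re-upload):

# indexes.conf (sketch)
[volume:s3index1]
storageType = remote
path = s3://bucket4index1

[index1]
remotePath = volume:s3index1/index1
homePath = $SPLUNK_DB/index1/db
coldPath = $SPLUNK_DB/index1/colddb
thawedPath = $SPLUNK_DB/index1/thaweddb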
Hello Splunkers, How can I use tab completion and command history in the Python that is packaged with Splunk? The Python version (./bin/splunk cmd python) shipped with Splunk Enterprise v9 is 3.7.11. However, there is no tab completion or command history: Tab is interpreted as four spaces, the up/down arrow keys are interpreted as ^[[A or ^[[B, and even simple cursor positioning with the left/right arrow keys produces ^[[D or ^[[C.

(dev2) splunk@host1:~ $ ./bin/python3
Python 3.7.11 (default, Jul 27 2022, 02:48:51)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> ^[[A
  File "<stdin>", line 1
    ^
SyntaxError: invalid syntax
>>> ^[[B
  File "<stdin>", line 1
    ^
SyntaxError: invalid syntax
>>>

This is a simple requirement for quick and dirty troubleshooting of Python commands. It's a major pain not to have access to history or to be able to use the left/right arrow keys to move the cursor. Please help. Thanks in advance!
Hello, I'm a Splunk Cloud admin with the following challenge: I want to segregate the access of multiple teams within the company so that each team can only read/write the reports, alerts, and dashboards owned by that team. My idea is to create an app for each team. Let's use this team structure as an example:

SOC Team
AppSec Team
R&D Team

First, I would create the following roles: SOC, AppSec, R&D.

Second, I would create the following apps and attach the roles like this:

SOC (SOC role has R/W access, others have NO access)
AppSec (AppSec role has R/W access, others have READ only)
R&D (R&D role has R/W access, others have READ only)

With this implemented, each team will be able to create alerts/dashboards/etc. with the permission "shared in app", and this won't affect the other teams.

Is there any issue or limitation with this approach? I did not spot any.
I created an alert on a scheduled job that should trigger whenever the count is greater than 1. It is supposed to trigger an alert, but it is not triggering. Can someone help me with this?
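One place to start is the scheduler log in the _internal index, which shows whether the saved search actually ran and how many results it returned. A rough sketch, assuming the alert is named "my alert" (hypothetical name):

index=_internal sourcetype=scheduler savedsearch_name="my alert"
| table _time status result_count run_time alert_actions

If result_count exceeds the threshold but nothing fires, the trigger condition and throttling settings on the alert are the next things to check.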
I know that I can get the current size of an accelerated data model using REST or the web GUI under Settings > Data models, but how can I see the historical (disk) size of the accelerated data model over time?
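If no built-in history is available, one option is to record the current size on a schedule and chart the recorded values later. A rough sketch, assuming the admin/summarization REST endpoint exposes a summary.size field (as used by the Data Models page) and that a summary index named dm_size_history exists (hypothetical name):

| rest /services/admin/summarization by_tstats=t splunk_server=local
| eval size_mb = 'summary.size' / 1024 / 1024
| table summary.id size_mb
| collect index=dm_size_history

Scheduled daily, this builds up a simple time series that a later search over dm_size_history can timechart.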
Due to an administrative decision, we have "inherited" an independent Splunk installation (as opposed to our "core" system). This system is at 9.0. Our existing system is at 8.2.4. We need to hook the 9.0 system into our existing license master (which is also a deployment master). Of course this won't work due to the version mismatch. Given the urgency, is it possible to upgrade our license/deployment master to 9.1 while leaving the rest of our existing servers at 8.2.4, so that we do not need to move our planned upgrade from late January to now?
Hi, We are trying to install and configure the SAP SolMan Technology Add-on (https://splunkbase.splunk.com/app/4301) and connect Splunk to SAP SolMan. While saving the configuration we get the error message below:

HTTP 400 Bad Request
Error connecting to SAP ODATA endpoint at https://xxxxx:44300/sap/opu/odata/sap - UnexpectedHTTPResponse
0 failed to build secure connection to xxxxx!

We validated the certificate on the Splunk HF and tried to connect using a curl command, which shows a successful connection with 200 OK, but we still get the error message in the add-on. Has anyone faced a similar issue, or can anyone help us with this? Thanks in advance.
Hi. I have added over 130 AWS accounts via AWS integrations into Splunk SignalFx. This has been done via Terraform. I wanted to ask whether it would be possible to add custom tags during the data transfer from AWS to Splunk. Example: I have one AWS account which is used by a specific team, team X. Unfortunately, this team didn't set AWS tags properly, and I would like to be able to filter all the resources coming from this team. Instead of forcing team X to add the tag to all of the resources in their AWS account, I was wondering if it would be possible to add custom tags before these resources come into Splunk SignalFx. This way, when I want to filter these resources, I can specify this custom tag (team: TeamX) and get all the resources for that team.
Hi All, How do I find hosts with more than 3 heartbeat failures, along with the failure reason, from the same host in a day, and put that in a table? I am currently using the search below:

index="my index" sourcetype="my sourcetype" action="heartbeatfailure"
| bucket _time span=day
| stats count by _time host action failure_reason
| where count>2

Because the failure reason differs between events, I am unable to get a result per host for the past 24 hours. How do I get stats count by _time, host, and action, with failure_reason in the same table?
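A rough sketch of the usual adjustment for this: take failure_reason out of the split-by fields so the count is per host, and collect the distinct reasons with values() instead (index and field names taken from the search above):

index="my index" sourcetype="my sourcetype" action="heartbeatfailure"
| bucket _time span=1d
| stats count values(failure_reason) AS failure_reason BY _time host action
| where count > 3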
Hello, My requirement is: if the field "fields.summary" contains events whose value contains ".DT", then I want to create a new field "Summary" and set the value of the new field to "Security Incident". I have created the query below, but it's not working as expected.

index="main" AND source=jira
| spath
| eval summary=if(match(fields.summary,".DT-"),"Security Incident","no")

Please advise. -- Thanks, Siddarth
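For reference, a field name containing a dot has to be wrapped in single quotes when it is read inside eval, and a literal dot in the regex needs escaping; a minimal sketch along those lines (whether to match ".DT" or ".DT-" depends on the actual data):

index="main" source=jira
| spath
| eval Summary=if(match('fields.summary', "\.DT"), "Security Incident", "no")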