Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

My dashboard uses custom variables to fill in dates in the section headers. When I export it as a PDF from the UI, the variables are populated correctly. When I schedule the PDF for e-mail delivery, the custom variable isn't populated. How can I get the label to populate correctly on the e-mailed version of the dashboard?
I have a playbook that adds a row to a custom list for each task that can't be processed at runtime, and I'm building a second timer-driven playbook that should retry each of those actions. Each row has five columns: four for the values needed to attempt the action, and a counter that should be incremented for each retry (after five tries, it should remove the row and alert that the task can't be performed automatically).

I can use phantom.get_list() (and capturing only the third element, which is the list contents) to get the contents of the custom list into the retry playbook as a Python list, but I'm having trouble coming up with a way to iterate through them. I've tried the recommendation in another question/answer (https://community.splunk.com/t5/Splunk-SOAR-f-k-a-Phantom/How-do-you-achieve-quot-for-quot-loops/m-p/615841), but passing the retrieved list from a code block into a format block with %% {0} %% as the format, then doing a python.debug on format_1:formatted_data.*, just returns the monolithic list once. The behavior I need is for it to spin up the code block for each row of the incoming list.

Is this possible with Phantom? If so, is this approach correct, and what might I be doing wrong here?
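One alternative to the format-block approach is to do the iteration entirely inside a custom code block. A hedged sketch, assuming a hypothetical list name "retry_tasks" and the column layout described above (four action values plus a retry counter), and relying on the asker's own observation that the third element returned by phantom.get_list() is the row data:

import phantom.rules as phantom

def retry_failed_tasks(**kwargs):
    # phantom.get_list() returns (success, message, rows), where rows is a list of row lists
    success, message, rows = phantom.get_list(list_name="retry_tasks")
    if not success:
        phantom.debug("Could not read custom list: {}".format(message))
        return
    for row in rows:
        # first four columns: values needed to re-attempt the action; fifth: retry counter
        value1, value2, value3, value4, retry_count = row
        phantom.debug("Retrying task {} (attempt {})".format(value1, int(retry_count) + 1))
        # ...kick off the retry action for this row here...

This keeps the per-row loop in one block rather than spinning up a downstream block per row, which may or may not fit the playbook design, but it avoids the format-block fan-out problem entirely.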
Hi Everyone,

Has anyone ever tried to migrate a single index in an existing SmartStore clustered indexer environment to a new S3 bucket? For compliance purposes, I need to use an internal S3-compatible environment. I now need to divide my single-bucket environment into multiple S3 buckets.

For example:

[volume:s3]
path = s3://bucket4all

into:

[volume:s3]
path = s3://bucket4all

[volume:s3index1]
path = s3://bucket4index1

[volume:s3index2]
path = s3://bucket4index2
(etc...)

Not even sure this is possible... Thanks
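For reference, a hedged sketch of what the per-index side of such a split might look like in indexes.conf (volume, bucket, endpoint, and index names here are placeholders, not taken from the post above). The harder question, which is worth confirming with Splunk support, is how to handle buckets that have already been uploaded to the old volume for that index.

[volume:s3index1]
storageType = remote
path = s3://bucket4index1
remote.s3.endpoint = https://s3.internal.example.com

[index1]
remotePath = volume:s3index1/$_index_name
homePath = $SPLUNK_DB/index1/db
coldPath = $SPLUNK_DB/index1/colddb
thawedPath = $SPLUNK_DB/index1/thaweddb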
Hello Splunkers,

How can I use tab-completion and command history in the Python that is packaged with Splunk? The Python version [./bin/splunk cmd python] with Splunk Enterprise v9 is 3.7.11. However, there is no tab-completion or command history: Tab is interpreted as four spaces, the up/down arrow keys are interpreted as ^[[A or ^[[B, and even simple cursor positioning with the right/left arrow keys produces ^[[D or ^[[C.

(dev2) splunk@host1:~ $ ./bin/python3
Python 3.7.11 (default, Jul 27 2022, 02:48:51)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> ^[[A
  File "<stdin>", line 1
    ^
SyntaxError: invalid syntax
>>> ^[[B
  File "<stdin>", line 1
    ^
SyntaxError: invalid syntax
>>>

This is a simple requirement for quick-and-dirty troubleshooting of Python commands. It's a major pain not to have access to history or to be unable to use the left/right arrow keys to move the cursor. Please help. Thanks in advance!
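Interactive tab completion and history in CPython come from the readline module; if the bundled interpreter was built without it, no key binding will help. A quick hedged check you can paste into ./bin/splunk cmd python3:

# If the import fails, this Python build simply has no readline support.
try:
    import readline
    import rlcompleter  # registers the completer used for tab completion
    readline.parse_and_bind("tab: complete")
    print("readline available: tab completion and history are enabled")
except ImportError:
    print("readline is not compiled into this Python build")

If readline turns out to be missing, wrapping the interpreter with an external tool such as rlwrap (rlwrap ./bin/splunk cmd python3) is a common workaround for getting history and arrow-key editing.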
Hello, I'm a Splunk Cloud admin with the following challenge: I want to segregate the access of multiple teams within the company so they can only read/write the reports, alerts, and dashboards that are owned by their team. My idea is to create an app for each team. Let's use this team structure as an example:

SOC Team
AppSec Team
R&D Team

First, I would create the following roles: SOC, AppSec, R&D.

Second, I would create the following apps and attach the roles like this:

SOC (SOC role has R/W access, others have no access)
AppSec (AppSec role has R/W access, others have read-only access)
R&D (R&D role has R/W access, others have read-only access)

With this implemented, each team will be able to create alerts/dashboards/etc. with the permission "shared in app", and this won't affect the other teams.

Is there any issue or limitation with this approach? I did not spot any.
I created an alert on a scheduled job that should trigger whenever the count is greater than 1. It is supposed to trigger an alert, but it is not triggering. Can someone help me with this?
I know that I can get the current size of an accelerated data model using REST, or via the web GUI under Settings > Data models, but how can I see the historical (disk) size of the accelerated data model over time?
Due to an administrative decision, we have "inherited" an independent Splunk installation (as opposed to our "core" system).  This system is at 9.0.  Our existing system is at 8.2.4.  We need to hook the 9.0 system into our existing license master (which is also a deployment master).  Of course this won't work due to the version mismatch.  Due to the urgency of this, is it possible to upgrade our license/deployment master to 9.1, leaving the rest of our existing servers at 8.2.4, so we do not need to move our planned upgrade from late January to now?
Hi, we are trying to install and configure the SAP SolMan Technology Add-on (https://splunkbase.splunk.com/app/4301) and connect Splunk to SAP SolMan. While saving the configuration we get the following error message:

HTTP 400 Bad Request
Error connecting to SAP ODATA endpoint at https://xxxxx:44300/sap/opu/odata/sap - UnexpectedHTTPResponse 0 failed to build secure connection to xxxxx!

We validated the certificate on the Splunk HF and tried to connect using curl, which shows a successful connection with 200 OK, but we still get the error in the add-on. Has anyone faced a similar issue, or can anyone help us with this? Thanks in advance.
Hi. I have added over 130 AWS accounts via AWS integrations into Splunk SignalFx. This has been done via Terraform. I wanted to ask whether it would be possible to add custom tags during data transfer from AWS to Splunk. Example: I have one AWS account which is used by a specific team, team X. Unfortunately, this team didn't set AWS tags properly, and I would like to be able to filter all the resources which are coming from team X. Instead of forcing team X to add this tag to all of the resources they have in their AWS account, I was wondering if it would be possible to add custom tags before these resources come into Splunk SignalFx. This way, when I want to filter these resources, I can specify this custom tag (team: TeamX) and get all the resources for that team.
Hi All, how can I find hosts with more than 3 heartbeat failures in a day, along with the failure reason, and put them in a table? I am currently using the search below:

Index="my index" sourcetype="my sourcetype" action="heartbeatfailure"
| bucket _time span=day
| stats count by _time host action failure_reason
| where count>2

Because the failure reason differs between events, I am unable to get a result per host for the past 24 hours. How can I get stats count by _time, host, and action, together with failure_reason, in the same table?
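A hedged sketch of one way to keep the per-host count while still listing the reasons, assuming the index, sourcetype, and field names quoted above: aggregate by host only and collect failure_reason with values() instead of grouping by it.

index="my index" sourcetype="my sourcetype" action="heartbeatfailure"
| bucket _time span=1d
| stats count AS failures values(failure_reason) AS failure_reason by _time host action
| where failures > 3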
Hello, my requirement is: if the field "fields.summary" contains ".DT", I want to create a new field "Summary" and set its value to "Security Incident". I have created the query below, but it is not working as expected.

index="main" AND source=jira
| spath
| eval summary=if(match (fields.summary,".DT-"),"Security Incident","no")

Please advise.

Thanks, Siddarth
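A hedged sketch of a likely fix: because the field name contains a dot, it has to be wrapped in single quotes inside eval/match so it is treated as one field name, and the literal dot in the pattern should be escaped since match() takes a regular expression. The new field is also capitalized as Summary per the stated requirement.

index="main" source=jira
| spath
| eval Summary=if(match('fields.summary', "\.DT"), "Security Incident", "no")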
I am trying to create an alert that triggers when the location field of a user's login event changes. So if a user logged in from London earlier and the next login comes from Dublin, I want an alert to trigger. The login event has a username field and a client.geoLocation.city field.
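A hedged sketch of one way to detect the change, using placeholder index/sourcetype names and the two field names mentioned above: compare each login's city with the same user's previous login using streamstats.

index=your_login_index sourcetype=your_login_sourcetype
| sort 0 username _time
| streamstats window=1 current=f last(client.geoLocation.city) AS previous_city by username
| where isnotnull(previous_city) AND previous_city != 'client.geoLocation.city'
| table _time username previous_city client.geoLocation.city

Saved as an alert that fires when the result count is greater than zero, this would flag each login whose city differs from that user's previous one.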
I run large searches at the start of each month. Generally I use the savedsearch command to retrieve the results on dashboards, e.g. | savedsearch report_name. However, we sometimes use outputlookup at the end of the search and inputlookup to retrieve the data on the dashboard, e.g. | outputlookup report_file.csv. I have recently had some issues with saved searches:

- Jobs being deleted, which causes my saved search results to disappear
- For saved search results to be refreshed, the report needs to be rescheduled and run again
- Odd behaviour where reports run but the data is not actually picked up by dashboards

These issues do not apply to the outputlookup reports, which can more easily be re-run and can also be edited with Lookup Editor if required. Can anybody tell me which is more efficient to use and which should be the default option? Are there any advantages or disadvantages to either command that I have not considered?
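For comparison, a minimal sketch of the lookup-based pattern described above (index, field, and file names are placeholders): the scheduled report overwrites the lookup file each month, and the dashboard panel reads the file back with inputlookup, so the panel does not depend on the lifetime of any search job artifact.

Scheduled monthly report:
index=your_index earliest=-1mon@mon latest=@mon
| stats count by some_field
| outputlookup monthly_report.csv

Dashboard panel:
| inputlookup monthly_report.csv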
I want to run some commands on my Splunk heavy forwarder servers and output the results to a folder. I want to monitor these folders and push the data to the Splunk indexers. Is my only option installing universal forwarders on the same servers, or can I simply configure inputs.conf and outputs.conf?
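A hedged sketch of the inputs.conf side on the heavy forwarder itself (the path, index, and sourcetype are placeholders): since a heavy forwarder is a full Splunk Enterprise instance, it can monitor local files directly and forward them through its existing outputs.conf, so a separate universal forwarder is not required.

[monitor:///opt/scripts/output]
index = command_output
sourcetype = command_output
disabled = false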
Requirement: call REST APIs and ingest the data into Splunk into specified indexes. As of now, we are using the Splunk Add-on Builder application to create apps for the REST API calls and import the data into Splunk. The limitation with this approach is that we are not able to call an API dynamically or on an ad-hoc basis, only when it is actually needed. The team wants a UI to call the REST APIs dynamically and show this data in a dashboard. Is there any way in Splunk to provide this capability?
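One pattern that can fit the "UI-driven, ad-hoc" requirement is a custom generating search command built with the splunklib SDK: a dashboard search or button can run it on demand, and the results can be written to an index with collect if needed. A hedged sketch only, with the command name, URL option, and endpoint as placeholders, and authentication/error handling omitted:

import sys
import json
import requests
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

@Configuration()
class CallApiCommand(GeneratingCommand):
    # URL is supplied at search time, e.g. | callapi url="https://api.example.com/items"
    url = Option(require=True)

    def generate(self):
        response = requests.get(self.url, timeout=30)
        for item in response.json():
            # emit one event per returned item
            yield {"_raw": json.dumps(item)}

dispatch(CallApiCommand, sys.argv, sys.stdin, sys.stdout, __name__)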
I have 2 CSV files. The first one has name and id; the second one has the id only. I can extract the common id, but I couldn't find the query to show the corresponding name using the id. Can anybody help, please?
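A hedged sketch, assuming the two files are available as lookup files named first.csv (columns name, id) and second.csv (column id): join the id-only file against the first file to pull in the matching name.

| inputlookup second.csv
| join type=inner id
    [ | inputlookup first.csv ]
| table id name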
Hello, after upgrading Splunk from version 8.1.5 to 9.0 we are getting an "indexing not ready" error on the Splunk deployment server. Is there anything we need to do in indexer clustering? What is the solution? Can anyone help?
Hi Community,

I have a search query where I am trying to get values for the search from the results of another query.

index=_internal [ `set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| search pool = "*"
| search h = hp742srv OR dell970srv OR dell428srv OR hp548srv OR dell429srv OR dell477srv OR dell433srv
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false limit=30
| join type=outer _time
    [ search index=_internal [ `set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | stats latest(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

The "| search h = ..." statement contains a list of host names which I have entered manually using the OR operator. The query below can generate the list of hosts, but I am not able to use its results in the query above.

index=mx_logs "mx.env"="dell1192srv.fr.mx.com:15022"
| table host
| dedup host

How can I use the results from the second query dynamically in the first SPL query? Thanks in advance.

Regards, Pravin
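A hedged sketch of one way to do this: replace the hard-coded "| search h = ... OR ..." line with a subsearch that returns the hosts, renamed to h so they match the field used by license_usage.log. The subsearch output is expanded into (h="..." OR h="...") automatically.

| search
    [ search index=mx_logs "mx.env"="dell1192srv.fr.mx.com:15022"
    | dedup host
    | rename host AS h
    | fields h ]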
From AWS storage we are already getting data into a territory-specific instance (for example, a Singapore on-prem instance). Now I want the same data in the Singapore instance as well as in a global instance (cloud). How can I do this? Can anyone suggest a solution, and if there is one, what potential roadblocks might I face while trying it?
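If the AWS data first lands on a forwarder you control, one common pattern is to clone its output to both destinations in outputs.conf. A hedged sketch only; the server names are placeholders, and Splunk Cloud normally supplies its own forwarder credentials app with the correct tcpout settings:

[tcpout]
defaultGroup = singapore_indexers, cloud_indexers

[tcpout:singapore_indexers]
server = sg-idx1.example.com:9997

[tcpout:cloud_indexers]
server = inputs.yourstack.splunkcloud.com:9997

Roadblocks to weigh with this approach: every event is sent and indexed twice, so license/ingest consumption and outbound bandwidth both double.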