All Topics

Hi, is it possible to have an annotation on only one chart when viewing by trellis? For example, in the image below, I only want the annotation on the first chart. Note that:
- I'm using the trellis visualization.
- The annotation is currently defined by adding this block to the XML source:

<search type="annotation">
  <query>| makeresults
| eval _time="2022-09-20 00:00:00", message="Change equipement", type="type1"
| eval annotation_label = message</query>
  <earliest>0</earliest>
  <latest></latest>
</search>
<option name="charting.annotation.categoryColors">{"type1":"0xffcc00"}</option>

- The list of IDs with the date of each change is stored in a lookup CSV.

I tried the following, but it shows the annotation on every chart:

<search type="annotation">
  <query>| inputlookup list_id.csv
| search NoEq=$id$
| eval _time=_time, message="Change", type="type1"
| eval annotation_label = message</query>
  <earliest>0</earliest>
  <latest></latest>
</search>
We have a standalone Splunk v8.2.0 deployment (16 vCPU / 32 GB memory). From other posts here I started looking through the Monitoring Console, and under "Indexing Performance: Advanced" I found that the "summarydirectorsearchexecutorworker" is constantly running above 92%, 100% of the time. How can I bring this down?
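A starting point for investigating: the _introspection index records per-process CPU usage, so a sketch like the one below (host is a placeholder) can confirm whether those workers are actually what's consuming the cores, and when:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=<your_host>
| eval process=coalesce('data.process_type', 'data.process')
| timechart span=5m avg(data.pct_cpu) by process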
I am aware that it is pretty simple to programmatically add new apps to Splunk's Deployment Server: you simply drop them in the right directory and invoke a reload of the service. For server classes, however, this does not seem so trivial. The official documentation asks us to add apps to a server class manually, from the user interface. I don't like this much, as it is error-prone and not reproducible. Is there any way (or magic location in the Deployment Server's filesystem, as there is for apps) where I can drop the configuration of a server class so that it gets loaded into the graphical UI automatically? That would let me configure a simple Ansible/Puppet/Chef... task to take care of those files.
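For what it's worth, server classes are themselves just stanzas in serverclass.conf, which on the deployment server typically lives under $SPLUNK_HOME/etc/system/local/. A config-management tool can template that file and trigger a reload; the UI simply renders whatever the file contains. A minimal sketch (class name, whitelist patterns, and app name are examples):

# $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:linux_servers]
whitelist.0 = web-*
whitelist.1 = db-*

[serverClass:linux_servers:app:my_custom_app]
restartSplunkd = true

After writing the file, "$SPLUNK_HOME/bin/splunk reload deploy-server" applies it without touching the UI, which makes the whole workflow scriptable from Ansible/Puppet/Chef.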
I don't seem to be able to integrate two radio buttons on a single dashboard to achieve the following selection logic. Each radio button has two options, with an "OR" condition on the 2nd radio button. The choice in the 1st radio button drives the 2nd radio button to display in one of two different ways. A drawing of the selection logic (see the sketch after this post):

                        RB1
           State1                State2
             |                      |
            RB2         OR         RB2
      State1a  State1b       State2a  State2b

Explanation of the drawing: if State1 is selected in RB1, RB2 displays State1a and State1b for selection, OR if State2 is selected in RB1, RB2 displays State2a and State2b for selection. Is this possible, and if so, how can it be accomplished on a dashboard in Simple XML?
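One pattern that should achieve this in Simple XML (a sketch; the token and label names are made up): give RB1 a <change> handler that sets a visibility token, and define two versions of RB2 that each use depends to show only when the matching token is set:

<input type="radio" token="rb1">
  <label>RB1</label>
  <choice value="state1">State1</choice>
  <choice value="state2">State2</choice>
  <change>
    <condition value="state1">
      <set token="show_rb2_s1">true</set>
      <unset token="show_rb2_s2"></unset>
    </condition>
    <condition value="state2">
      <set token="show_rb2_s2">true</set>
      <unset token="show_rb2_s1"></unset>
    </condition>
  </change>
</input>
<input type="radio" token="rb2" depends="$show_rb2_s1$">
  <label>RB2</label>
  <choice value="state1a">State1a</choice>
  <choice value="state1b">State1b</choice>
</input>
<input type="radio" token="rb2" depends="$show_rb2_s2$">
  <label>RB2</label>
  <choice value="state2a">State2a</choice>
  <choice value="state2b">State2b</choice>
</input>

Only one of the two RB2 inputs is ever visible, so both can write to the same $rb2$ token.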
Hi, I have the Splunk search below:

| makeresults
| eval _raw="The first value is 0.00 and The second value is 0\",\"origin\":\"rep\",\"source_instance\":\"0\""
| rex "The\sfirst\svalue\sis (?<from>.*) and\sThe\ssecond\svalue\sis (?<to>.*)"

This shows the "from" field as 0.00 and the "to" field as 0","origin":"rep","source_instance":"0". In the "to" field I only want the value 0. How do I achieve that?
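The greedy .* is what grabs the rest of the line; constraining the capture to digits stops it at the 0. A sketch:

| makeresults
| eval _raw="The first value is 0.00 and The second value is 0\",\"origin\":\"rep\",\"source_instance\":\"0\""
| rex "The\sfirst\svalue\sis\s(?<from>[\d.]+)\sand\sThe\ssecond\svalue\sis\s(?<to>\d+)"

Here "from" captures 0.00 and "to" captures just 0, because \d+ cannot match past the closing quote.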
Good morning/afternoon/evening, community! I've run into an issue with detecting VPN tunnel interface statuses, which are identified from ping data inputs. Can you give me some ideas on how to organize a search that prints a table like the one below? The first table represents the logic for detecting the tunnel status. Thanks in advance for any response!
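Since the screenshots didn't come through, here is only a generic sketch, assuming each ping event carries a tunnel identifier and a success/failure field (both field names are hypothetical): take the latest result per tunnel and map it to a status:

index=ping_data
| stats latest(ping_result) as last_result, latest(_time) as last_seen by tunnel
| eval status=if(last_result="success", "UP", "DOWN")
| convert ctime(last_seen)
| table tunnel status last_seen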
I need to create an alert for a service, but real-time alerts are disabled by the admin. I need an alert that emails me immediately once my service gets more than 5 "bad service" alerts. I created the alert, but it sends the email at the end of the time-range cycle. In the alert I set Time range: "last 30 minutes" and Cron expression: */30 * * * *, expiring in 24 hours. It runs and sends the email, but only at the end of the 30-minute cycle, not at the moment the alert condition occurs. Is there any way to make it trigger at the same time the alerts come in? Please help me...
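With real-time alerts disabled, the trigger can't be truly instantaneous, but you can shrink the worst-case delay by scheduling the search far more often than its time range, e.g. run every 5 minutes over a sliding 30-minute window, and throttle so the same condition doesn't email you repeatedly. A sketch of the equivalent savedsearches.conf settings (the stanza name is an example):

[My bad-service alert]
cron_schedule = */5 * * * *
dispatch.earliest_time = -30m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 5
alert.suppress = 1
alert.suppress.period = 30m

This bounds the delay at roughly 5 minutes instead of 30, at the cost of running the search more often.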
Good evening. We are unable to send data to our Splunk Cloud trial instance. To send data to the HTTP Event Collector, we followed this document and sent a POST request from Postman to the URL https://http-inputs-<hostname>.splunkcloud.com:8088/services/collector/event and got "getaddrinfo ENOTFOUND" as the error. On trying a POST request to the URL https://<hostname>.splunkcloud.com:8088/services/collector/event, we got "Error: Request timed out". Is the documentation wrong? How do I get this working?
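If I'm reading the HEC docs correctly, the URI scheme differs by subscription type, and the port matters: the http-inputs- prefix is paired with port 443, while trial/self-service stacks use the inputs. prefix with port 8088. Worth trying the sketch below (hostname and token are placeholders):

curl "https://inputs.<hostname>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from HEC", "sourcetype": "manual"}'

Combining http-inputs- with 8088, as above, would explain the ENOTFOUND/timeout symptoms.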
I cannot access the Splunk UBA web interface. It's on a single Linux server; it passed the precheck, setup is done, and when I check the caspida status everything is OK. Is it because of a firewall rule, Docker, or something else? Please help me.
Browser: Chrome / Firefox
OS: RHEL 8.5
UBA version: 5.1.0
Network: eth0
Thanks for the help.
UPDATE! This is my firewall warning:
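If the caspida services are healthy, the firewall is a plausible culprit. Assuming the UBA UI is served over HTTPS on port 443 (an assumption worth verifying for your version), a quick RHEL-side check:

# see what the firewall currently allows
sudo firewall-cmd --list-all
# open HTTPS for the UBA web interface, then reload
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload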
Hello, I want to add a drilldown where, if a user clicks on a column name, a new dashboard opens.

col1  col2  col3
1     2     3
4     5     6

If the user clicks on the col2 column name, it should open a new dashboard.
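A caveat first: in Simple XML tables, clicking the column header sorts the table rather than firing a drilldown. What you can do is restrict the drilldown to clicks anywhere in that column with a field-scoped <condition>. A sketch (the target dashboard path and form token are hypothetical):

<table>
  <search><query>... your search ...</query></search>
  <drilldown>
    <condition field="col2">
      <link target="_blank">/app/search/my_other_dashboard?form.value=$click.value2$</link>
    </condition>
    <!-- clicks on any other column do nothing -->
    <condition field="*">
      <set token="noop">1</set>
    </condition>
  </drilldown>
</table>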
I am trying to get data into Splunk APM from my Python and Golang applications deployed in Kubernetes, but I am not able to see data from either app. I also tried running the apps on a Linux system, but I'm not getting any data from there either.
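In case it helps narrow things down: Splunk APM instrumentation for Python goes through the splunk-opentelemetry distribution, and a common failure mode is missing realm/token configuration. A minimal sketch for a Python app sending directly to Splunk, outside Kubernetes (all values are placeholders):

# pip install "splunk-opentelemetry[all]"
export SPLUNK_ACCESS_TOKEN=<your-apm-access-token>
export SPLUNK_REALM=us0
export OTEL_SERVICE_NAME=my-python-app
splunk-py-trace python main.py

Getting one app visible this way on plain Linux first makes it easier to tell whether the Kubernetes problem is the app instrumentation or the collector path.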
Hey people, my requirement is as follows. I have extracted these columns from my data using the query:

my query
| rex "filterExecutionTime=(?<FET>[^,]+)"
| rex "ddbWriteExecutionTime=(?<ddbET>[^)]+)"
| rex "EXECUTION_TIME : (?<totalTime>[^ ms]+)"
| eval buildAndTearDownTime=(tonumber(FET)) + (tonumber(ddbET))
| table totalTime FET ddbET buildAndTearDownTime

I want buildAndTearDown to be totalTime - (FET + ddbET). Once I have all three values (FET, ddbET, buildAndTearDown), I want to put them in a pie chart. Thanks!
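Two things stand out in the posted search: the eval adds FET and ddbET instead of subtracting them from totalTime, and a pie chart needs the three numbers as rows (category/value pairs) rather than as columns, which transpose can produce. A sketch building on the same rex extractions:

my query
| rex "filterExecutionTime=(?<FET>[^,]+)"
| rex "ddbWriteExecutionTime=(?<ddbET>[^)]+)"
| rex "EXECUTION_TIME : (?<totalTime>\d+)"
| eval buildAndTearDown = tonumber(totalTime) - (tonumber(FET) + tonumber(ddbET))
| stats sum(FET) as FET, sum(ddbET) as ddbET, sum(buildAndTearDown) as buildAndTearDown
| transpose
| rename column as component, "row 1" as time

The result is one row per component, which renders directly as a pie chart of component by time.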
We have a distributed deployment consisting of 2 search heads, 1 indexer, a deployment server, 2 heavy forwarders, universal forwarders, and a syslog server. We need to shut it down and then boot it back up. What is the best sequence to shut down and boot up the environment gracefully? Also, is there anything to keep in mind while doing so to avoid errors?
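I'm not aware of a single official ordering, but a commonly suggested one is to stop the search layer first and the indexer last, so in-flight data can drain, and start in the reverse order. As a sketch, running on each host in turn:

# shutdown order: search heads -> deployment server -> heavy forwarders / UFs / syslog -> indexer
$SPLUNK_HOME/bin/splunk stop
# boot order: indexer -> heavy forwarders / UFs / syslog -> deployment server -> search heads
$SPLUNK_HOME/bin/splunk start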
Based on the articles linked below, we have updated our Atlassian settings to pull the Bitbucket logs into our organization audit log, and we now want to know how to get them ingested into Splunk. Is there a specific add-on for pulling and ingesting these audit logs into Splunk? Or how else can we integrate and ingest them?
Articles:
https://bitbucket.org/blog/bitbucket-audit-logs-are-now-available-in-atlassian-access
https://support.atlassian.com/security-and-access-policies/docs/track-organization-activities-from-the-audit-log/
Can anyone help me with this requirement?
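I don't know of a Bitbucket-specific add-on for this, but the organization audit log those articles describe is exposed through Atlassian's Organizations REST API, so one option is a small scripted or modular input that polls the events endpoint and forwards the results into Splunk. A sketch of the pull side (org ID and API key are placeholders):

curl "https://api.atlassian.com/admin/v1/orgs/<orgId>/events" \
  -H "Authorization: Bearer <api-key>" \
  -H "Accept: application/json"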
Hi all, we are creating one dashboard with two tables, and we have set different folder locations for monitoring.

BAU table 1 query:
source="F:\\Logshipping\\Export\\BAU\\*" host="FinIQDB-DR" index="index_bau" EP_ER_QuoteRequestId=* EP_ER_QuoteRequestId!="EP_ER_QuoteRequestId"
| dedup EP_ER_QuoteRequestId
| table EP_ER_QuoteRequestId, orderStatus, EP_ExternalOrderId, ER_Created_At, ER_Created_By, EP_Order_Requested_At, EP_Order_Response_At, ER_Type, EP_ordertype, ER_UnderlyingCode, ER_LimitPrice1, ER_LimitPrice2, ER_LimitPrice3, source

DR table 2 query:
source="F:\\Logshipping\\Export\\DR\\*" host="FinIQDB-DR" index="index_dr" EP_ER_QuoteRequestId=* EP_ER_QuoteRequestId!="EP_ER_QuoteRequestId"
| dedup EP_ER_QuoteRequestId
| table EP_ER_QuoteRequestId, orderStatus, EP_ExternalOrderId, ER_Created_At, ER_Created_By, EP_Order_Requested_At, EP_Order_Response_At, ER_Type, EP_ordertype, ER_UnderlyingCode, ER_LimitPrice1, ER_LimitPrice2, ER_LimitPrice3, source

1. We get updated records in the BAU table whenever a file is updated in its folder.
2. We do not get updated records in the DR table when a file is updated; in that case we have to delete the index and re-create it, and only then are the new records populated in the grid.

Thanks.
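One way to see whether the forwarder is even noticing changes in the DR folder is to check splunkd's own file-monitoring logs; a sketch:

index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "Logshipping\\Export\\DR"
| table _time host component log_level _raw

If nothing shows up for the DR path while the BAU path logs activity, the problem is on the input/monitor side rather than in the search.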
Hi, could anyone please provide the Force Directed App, version 3.1.0, in apk or spl format?
https://splunkbase.splunk.com/app/3767
Thanks!
I'm trying to create a dashboard that shows when the KV store was restarted.
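Assuming the instance's internal logs are searchable, KV store lifecycle messages appear in _internal from splunkd; a sketch to start from (the keyword filter is an assumption you may need to widen):

index=_internal sourcetype=splunkd component=KVStoreConfigurationProvider (starting OR started OR shutdown)
| table _time host _raw

The resulting events can back a dashboard panel directly, e.g. as a table or a timechart of restart counts.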
I have events like this:

02.09.2022; seller david address 434 xyz house price 20000  [color:green] {noffloors: 5] status sold
02.09.2022; seller lenin address 222 abc  house price 30000  [color:red] {noffloors: 7] status sold

Assuming address, price, color, and noffloors are not indexed as fields, how do I obtain output like this? I am thinking of using regex but I don't know the exact expression.

address    price    color    nofloor
434 xyz    20000    green    5
222 abc    30000    red      7
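Given that exact event layout (including the mismatched {noffloors: 5] brackets, matched literally), a single rex can pull all four fields; a sketch:

| rex "address\s+(?<address>\d+\s+\w+)\s+house\s+price\s+(?<price>\d+)\s+\[color:(?<color>\w+)\]\s+\{noffloors:\s*(?<nofloor>\d+)\]"
| table address price color nofloor

On the sample events this yields address "434 xyz", price 20000, color green, nofloor 5 for the first row, and the corresponding values for the second.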
I have two lookups:

RLQuotas: Endpoint, Endpoint Name, Filter, Quota, Window
RLFilters: Attribute, Filter

I want to loop through all the endpoints. Every endpoint has a specific window, quota, and filter, and I am searching based on the filter's attribute. I want the output fields Endpoint Name, Filter, Quota. This is the query I came up with:

| inputlookup ID-RL-Quotas
| lookup ID-RL-Filters Filter
| fields Endpoint, "Endpoint Name", Attribute, Window, Quota, Filter
| rename "Endpoint Name" as EndpointName
| map [| eval Window = tonumber($Window$)
    | search sourcetype="some" http_url="$Endpoint$" minutesago=Window
    | eval ip = mvindex(split(http_remoteip,","),0)
    | eval EndpointName = "$EndpointName$"
    | eval WindowI = "$Window$"
    | eval QuotaI = "$Quota$"
    | eval FilterI = "$Filter$"
    | search $Attribute$ = "*"
    | stats values(EndpointName) as "Endpoint Name", values(FilterI) as Filter, values(WindowI) as Window, values(QuotaI) as Quota, count by $Attribute$
    | where count >= 0.8 * $Quota$
    | sort -count] maxsearches=10000

This only gives me output for one filter, not all of them.
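Before suspecting map, it may be worth confirming that the lookup join itself produces one row per endpoint/filter combination, since map launches one subsearch per input row; the sketch below runs just the outer part so you can see exactly how many rows map will receive:

| inputlookup ID-RL-Quotas
| lookup ID-RL-Filters Filter OUTPUT Attribute
| table Endpoint "Endpoint Name" Attribute Window Quota Filter

If this preview already shows only one row per filter, the problem is in the lookup matching (or the lookup contents), not in map.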
Can someone help me get the Splunk universal forwarder for AIX 5.3? Thanks!