All Topics


Hi, community members. I am trying to write a query that looks like this: | dbxquery query="select VULNERABILITY_LIFECYCLE, SOURCE, CLOSURE_FY, CLOSURE_QUARTER, CLOSURE_DATE from table [...]" | eval MONTH=strftime(strptime(CLOSURE_DATE,"%Y-%m-%d %H:%M:%S"),"%m") | eval SURSA = if(SOURCE!="QUALYS-P","Confirmed","Potential") | chart count over MONTH by SURSA. My problem is that I want this chart to represent a financial year, not a calendar year. How can I do this (and without skipping months)? Thank you for your support, Daniela
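A possible approach, as a sketch only: compute a fiscal-month index and prefix it to the month label so the chart sorts in fiscal order. This assumes the financial year starts in April; adjust the offset of 4 for your calendar.

```
| dbxquery query="select VULNERABILITY_LIFECYCLE, SOURCE, CLOSURE_FY, CLOSURE_QUARTER, CLOSURE_DATE from table [...]"
| eval m=tonumber(strftime(strptime(CLOSURE_DATE,"%Y-%m-%d %H:%M:%S"),"%m"))
``` comment: fiscal index 1..12, with April = 1 ```
| eval fy_index=((m - 4 + 12) % 12) + 1
``` prefix the index so chart sorts labels lexically in fiscal order, e.g. "01 Apr" ```
| eval MONTH=printf("%02d %s", fy_index, strftime(strptime(CLOSURE_DATE,"%Y-%m-%d %H:%M:%S"),"%b"))
| eval SURSA=if(SOURCE!="QUALYS-P","Confirmed","Potential")
| chart count over MONTH by SURSA
```

To avoid skipping months with no closures, one option is to append one dummy row per fiscal month (e.g. via makeresults) before the chart so every label exists.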
Hi, I have a dashboard in which I work with tokens to select specific customer data. By selecting the correct customer, the drop-down menu pushes a customer number (formatted like 30001, 30002, ... 30999) to all searches within the dashboard. This works flawlessly. Now I created a new search within that same dashboard that only uses the trailing number (1, 2, ... 999). If I test this in a normal search by pushing the customer ID via an eval into the cust_id field, it works. But the same search with tokens in the dashboard returns the error "waiting for input". I can't debug it either because there is no SID.   Working search:   index=x sourcetype=x | eval regid=30001 | eval cust_id=$regid$ | rex field=cust_id mode=sed "s/(^30*)//1" | where port==$cust_id$ | top limit=1 port     NOT working dashboard search:   index=x sourcetype=x | eval cust_id=$regid$ | rex field=cust_id mode=sed "s/(^30*)//1" | where port==$cust_id$ | top limit=1 port
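One likely cause, offered as a guess: inside a dashboard, anything wrapped in dollar signs is treated as a dashboard token, so `$cust_id$` is read as an unset token (not your eval field), and the panel waits for input. Referencing the field by its bare name avoids the token substitution:

```
index=x sourcetype=x
| eval cust_id=$regid$
| rex field=cust_id mode=sed "s/(^30*)//1"
| where port==cust_id
| top limit=1 port
```

Note also that the sed-style rex leaves cust_id as a string, so depending on how port is extracted you may need `where tostring(port)==cust_id` or a tonumber() on cust_id.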
Hi, is it possible to get the OTP dynamically in a synthetic job in a Selenium script in AppDynamics? Regards, Madhu
Hi, I have a key called duration in my application log that shows the duration of each job. When I take the daily maximum duration I get false positives, because an occasional high duration is natural; what is abnormal is when the duration stays high. e.g.
normal condition:
00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]
abnormal condition:
00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[50.01]
00:01:00.000 WARNING duration[90.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]
1. How can I detect the abnormal condition with Splunk (ideally the approach with the fewest false positives on huge data)?
2. Which visualization or chart is most suitable to show this abnormal condition daily? This is a huge log file and it is difficult to show all the data for each day on a single chart. Any ideas? Thanks,
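A sketch of one approach (the index name and the threshold of 50 are placeholders to adjust): a moving average over a sliding window smooths out single spikes, so only sustained high durations cross the threshold, and a timechart condenses a huge day of events into one point per hour.

```
index=app_logs "duration["
| rex "duration\[(?<duration>[\d.]+)\]"
``` average over the last 5 events; one spike among normal values stays low ```
| streamstats window=5 avg(duration) as moving_avg
| where moving_avg > 50
| timechart span=1h max(moving_avg) as sustained_duration
```

For the daily view, a timechart of the smoothed value (or a count of threshold breaches per day) is usually more readable than plotting every raw event.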
Hi Team, I want to know if it is possible to pass data present in a format block of one playbook to another playbook being called. So it's like PB1 ---> Format block ---> PB2 called ---> PB2 performs functions on the previous format block's data. I know that as a workaround this can be done by adding an artifact and using it in the sub-playbook, but I would prefer to do it without that. Kindly let me know if any further info is required. Thanks in advance! Regards, Shaquib
Hi, we see an inconsistency in the pctIdle value captured on the actual Linux machine versus in our Splunk_TA_nix app. The pctIdle values reported by the Splunk_TA_nix app do not match what we see on the Linux machine itself when checked with the sar command. The app version is Splunk_TA_nix 5.2.0. How do we fix this issue? Any suggestions, please?
Hello all, I haven't used rex many times. I have a URL like this: https://ab-abcd.in.xyz.com/abcd_xyz/job/example_name/ . Here ab-abcd.in.xyz.com is the server name, abcd_xyz is the project name, and example_name is the task name. I have to extract these using regex. I have tried this query: |rex field=URL "https:\//(?<server_name>\w*)/\(?<project_name>\w*)\/job\/(?<task_name>\w*)\/" | table server_name project_name task_name I know that this query is wrong, but I am confused about how to correct it. Can anyone help me correct this query?
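A possible correction, assuming the URL always has the https://server/project/job/task/ shape: match each path segment with `[^/]+` instead of `\w*`, since `\w` does not match the dots and hyphens in a hostname, and drop the stray backslashes.

```
``` each (?<name>[^/]+) captures everything up to the next slash ```
| rex field=URL "https://(?<server_name>[^/]+)/(?<project_name>[^/]+)/job/(?<task_name>[^/]+)/"
| table server_name project_name task_name
```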
I have an API with a number of endpoints, e.g., /health, /version, /specification and so on. I have a query which extracts the response time from the logs and aggregates it as duration. Here is the query: index=*my_index* namespace="my_namespace" "Logged Request" | rex field=log "OK:(?<duration>\d+)" | stats avg(duration) as Response_Time by log Here's an example of a generated log: " 2021-08-23 20:36:15.627 [INFO ] [-19069] a.a.ActorSystemImpl - Logged Request:HttpMethod(POST):http://host/manager/resource_identifier/ids/getOrCreate/bulk?streaming=true&dscid=52wu9o-b5bf6995-38c5-4472-90d7-2f3edb780276:200 OK:2622 " The query I've shared extracts the duration from every log. I need a way to extract the name of the endpoint and calculate the mean duration per endpoint. Can someone help me with this? I am really new to Splunk.
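A sketch, assuming the endpoint you want is the first path segment after the host (e.g. /manager in the sample log); widen the capture if you need a deeper path:

```
index=*my_index* namespace="my_namespace" "Logged Request"
| rex field=log "OK:(?<duration>\d+)"
``` grab the first path segment after the host: /manager, /health, /version, ... ```
| rex field=log "https?://[^/]+(?<endpoint>/[^/?:]+)"
| stats avg(duration) as Response_Time by endpoint
```

Grouping by the extracted endpoint rather than the full log line is what collapses every request for the same endpoint into one averaged row.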
I am displaying a line chart, and the tooltip only shows the Y-axis field. I want to customize the tooltip text so the chart overlay value comes from a third field. Below is my search query: sourcetype="<website Monitoring>" title="url1" | eval status=case(response_code == 200, "Online", response_code == "", "Offline", response_code == 404, "Not found", response_code == 500, "Internal Server Error") | timechart count by status | eval clust=case(title == "url1", "admin", title == "url2", "Cluster 1") | fields _time clust I want the URL to be displayed in the tooltip text (field=clust). Please share your expertise on how to achieve this.
Hi Folks,    I want to check at what time url has been brought up. Url already added in website monitoring. For example if the url was down at 12 PM and it has been brought up at 1 AM this dashboard panel should indicate 1 PM url went up. I want to monitor multiple urls for this scenario. Appreciate your expertise advise. 
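A sketch of one way to do this (it assumes each monitoring event carries a url field and a response_code field; rename to match your sourcetype): track the previous state per URL with streamstats and keep only the down-to-up transitions.

```
sourcetype="<website Monitoring>"
| eval state=if(response_code==200,"up","down")
``` sort 0 = no row limit; order events oldest-first per url ```
| sort 0 url _time
``` current=f makes last(state) the state of the *previous* event for that url ```
| streamstats current=f last(state) as prev_state by url
| where state="up" AND prev_state="down"
| table _time url
```

Each remaining row is the moment a URL recovered, so the table works for any number of monitored URLs at once.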
I don't know why there is no location for "SignalFx"-related questions. According to the SignalFlow API doc, https://dev.splunk.com/observability/reference/api/signalflow/latest#endpoint-start-signalflow-computation, there is a "start" parameter and a "stop" parameter, and I'm totally confused by the explanation: "The date and time that the computation should start/stop, in *nix time". Let's say the current time is 2021-09-22 20:27:00 (America/Los_Angeles).
If I choose a time range in the past:
What happens if the start time equals the stop time (start: 2021-09-21 00:00:00, stop: 2021-09-21 00:00:00)? Am I supposed to get nothing because the computation stops immediately? And how should I understand a computation that should start/stop yesterday?
What happens if the stop time is greater than the start time (start: 2021-09-21 00:00:00, stop: 2021-09-22 02:00:00)? Am I supposed to wait two hours for a computation that started/stopped yesterday?
And if I choose a time range in the future:
What happens if the start time equals the stop time (start: 2021-09-23 00:00:00, stop: 2021-09-23 00:00:00)? Am I supposed to get nothing because the computation stops immediately? And am I supposed to wait until 2021-09-23 00:00:00 for the computation to start?
What happens if the stop time is greater than the start time (start: 2021-09-23 00:00:00, stop: 2021-09-23 02:00:00)? Am I supposed to wait two hours for a computation that starts/stops in the future?
Hi, this seems super dumb, but I've been fiddling with it for an embarrassingly long time. It's been a couple of years since I've written any subsearches. I'm attempting to project data from the subsearches into a summary table (all from the same root search results). This is running on Splunk Cloud under a trial license. See the dumbed-down queries below. This happily returns a result:     index=xxx | search index=xxx admintom | stats count as x | table x     This returns nothing (`format` shows `NOT()`):     index=xxx [ search index=xxx admintom | stats count as x | table x ] | table x
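Two notes that may help, hedged since I can't see the full search: a subsearch's output is rewritten into search terms for the outer search, so `| table x` comes back as the literal filter x=<count>, which matches nothing in the raw events; and `format` prints `NOT()` when the subsearch returns zero rows. If the goal is simply to project a computed value into a summary table, appendcols sidesteps the term rewriting entirely:

```
index=xxx
| stats count as total
``` appendcols attaches the subsearch's columns instead of turning them into filters ```
| appendcols [ search index=xxx admintom | stats count as x ]
| table total x
```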
I am using the Splunk Add-on for Amazon Web Services to ingest json.gz files from an S3 bucket into Splunk. However, Splunk is not unzipping the .gz files to parse the JSON content. Is there something I should do for the unzipping to happen?
I have an EC2 syslog client and a macOS machine with Splunk Enterprise installed. I want Splunk Enterprise to be my syslog server, so I should only need to configure my syslog client to forward syslog to Splunk Enterprise, with no separate syslog server setup. On the Splunk server I created a UDP data input, exposed port 514, specified an index for this data input, and set the sourcetype to 'syslog'. On the syslog client side, I configured the destination as *.* <Splunk Enterprise IP>:514 in its rsyslog.conf file. I tried to generate syslog on the client side with logger, e.g. logger -p local0.crit "...", but no events show up in my index when I search. My understanding is that Splunk Enterprise can function as a syslog server that receives messages from syslog clients. (Screenshot is from: https://www.youtube.com/watch?v=BQU-bsSCXhk) Did I do any step incorrectly, or am I missing a step?
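One detail worth checking, based only on the configuration line quoted above: in rsyslog.conf the remote destination needs a leading @ for UDP (or @@ for TCP); without it, rsyslog treats the target as a local file name rather than a remote server.

```
# /etc/rsyslog.conf on the EC2 client -- a single @ means forward via UDP
*.* @<Splunk Enterprise IP>:514
```

It is also worth confirming that the EC2 security group allows outbound/inbound UDP 514 and that Splunk on macOS actually bound the port (ports below 1024 normally require elevated privileges).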
Hello, I am having an issue where Dashboard Studio has ceased to export background images (loaded onto the canvas) with the PDF and PNG download. When I download reports (via the download link in the upper right corner) in PDF/PNG format, the canvas background image is gone and only the dashboard's configured background color is exported. I am currently using a canvas background image and it is missing from the export of every dashboard, which is quite problematic. Does anyone have a solution? Thank you. Edit: I figured this might be a KV Store linking issue, so I uploaded the image to the server and linked to it rather than uploading it directly. The PDF export still does not contain the canvas background image. I also thought this might be a size limitation; however, the file is only around 1 MB in total, so that appears not to be the case either.
Hi there everyone, today I'm trying to update my Enterprise Console to the latest version; right now I'm using AppDynamics version 20.7.0-22903. I downloaded Enterprise Console 21.4.6, which is the latest version available from my management interface. Following all the steps, I got stuck at this action: the database is running without errors, and I can access the controller host database from the console host through bash without problems. Even with everything working, every time I try to update or execute the platform-setup-x64-linux-21.4.6.24635.sh script I get this message. Is there anything more I can debug to find out what is going on? Thank you.
I am using Splunk Cloud version 8.2 and trying to collect logs from domain controllers by installing UFs on them. I have a deployment server set up to manage those UFs, and here is my doubt: in order to connect the UFs to the Splunk Cloud instance, we need to install the UF credentials package on each of the UFs. Can I install it with the help of the deployment server, so that I don't have to install it manually on each UF? I would really appreciate it if someone could guide me on this. Thanks in advance!
Hi, I am trying to get the most recent value per item and filter on a specific status.
item itemdesc _time status
ITEM01 COKE 2021-09-21 22:00:05 FAILED
ITEM01 COKE 2021-09-20 13:00:15 FAILED
ITEM02 COKE 2021-09-21 21:00:12 PASSED
ITEM02 COKE 2021-09-21 20:00:05 PASSED
ITEM02 COKE 2021-09-21 19:00:05 FAILED
ITEM03 COKE 2021-09-20 12:00:05 FAILED
ITEM03 COKE 2021-09-19 11:00:15 PASSED
I need to check the most recent status by item, and pull the row only if status = FAILED. Output:
ITEM01 COKE 2021-09-21 22:00:05 FAILED
ITEM03 COKE 2021-09-20 12:00:05 FAILED
In this case ITEM02 is ignored, since its most recent status is PASSED.
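A sketch (index and sourcetype are placeholders): since Splunk returns events newest-first by default, dedup keeps exactly the most recent event per item, and the where clause then drops items whose latest status passed.

```
index=your_index sourcetype=your_sourcetype
``` keep only the newest event for each item (events arrive newest-first) ```
| dedup item
| where status="FAILED"
| table item itemdesc _time status
```

An equivalent alternative is `| stats latest(status) as status latest(_time) as _time latest(itemdesc) as itemdesc by item | where status="FAILED"` if you prefer an explicit aggregation.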
I need to see whether the default encryption between Splunk components can be checked via the GUI. I am talking about the SSL encryption. Please also help with ensuring that delivery of data from forwarders to indexers is acknowledged.