All Topics

I need an equivalent Splunk search for the following SQL query:

SELECT a.[CUSTOMER ID], a.[NAME], SUM(b.[AMOUNT]) AS [TOTAL AMOUNT]
FROM RES_DATA a
INNER JOIN INV_DATA b ON a.[CUSTOMER ID] = b.[CUSTOMER ID]
GROUP BY a.[CUSTOMER ID], a.[NAME]
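A minimal sketch of one possible SPL equivalent, assuming the two tables have already been ingested into sourcetypes named res_data and inv_data in an index named sales, with the bracketed SQL columns extracted as the Splunk fields CUSTOMER_ID, NAME, and AMOUNT (all of these names are assumptions):

index=sales sourcetype=inv_data
| stats sum(AMOUNT) AS TOTAL_AMOUNT BY CUSTOMER_ID
| join type=inner CUSTOMER_ID
    [ search index=sales sourcetype=res_data
      | fields CUSTOMER_ID NAME ]
| table CUSTOMER_ID NAME TOTAL_AMOUNT

This mirrors the SQL most directly; for large data sets a stats-based approach over a combined search is usually preferred over join, but the structure above shows the mapping of INNER JOIN and GROUP BY/SUM onto SPL.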
I have instrumented two applications (one Java app and one Node.js app) in AppDynamics, but when I try to query the applications, it returns no applications. I would appreciate your advice on what might be wrong. Thank you. I also tried copying the REST URL path from the metric browser:

https://<>.saas.appdynamics.com/controller/rest/applications/newhello/metric-data?metric-path=Overall%20Application%20Performance%7CCalls%20per%20Minute&time-range-type=BEFORE_NOW&duration-in-mins=720

The response shows:

HTTP Status 400 - Invalid application id newhello is specified
type: Status report
message: Invalid application id newhello is specified
description: The request sent by the client was syntactically incorrect.
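One way to check which application names and IDs the Controller actually exposes is to list them via the REST API first. A sketch, assuming basic authentication; the user, account, password, and controller host below are placeholders:

curl -s --user "myuser@customer1:MyPassword" \
  "https://<controller>.saas.appdynamics.com/controller/rest/applications?output=JSON"

If "newhello" does not appear in that list (or appears under a different name), the 400 "Invalid application id" response would be expected.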
How can I subtract the dequeueCount of now from the dequeueCount of an hour earlier, to monitor whether queues are progressing? The log lines look like this:

2022/06/14 09:19:02 queue=ABC size=0 consumerCount=2 enqueueCount=12907 dequeueCount=15466
2022/06/14 08:19:02 queue=ABC size=0 consumerCount=2 enqueueCount=12850 dequeueCount=15409

I am looking to compute the difference in dequeueCount (15466 - 15409) and set up an alert when it equals 0. Thanks
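A minimal sketch of one approach, assuming the events are in an index called mq_logs and the fields queue and dequeueCount are already extracted (the index name is an assumption):

index=mq_logs queue=*
| bin _time span=1h
| stats latest(dequeueCount) AS dequeueCount BY _time queue
| streamstats window=2 range(dequeueCount) AS dequeueDelta BY queue
| where dequeueDelta = 0

Scheduled hourly, this flags any queue whose dequeueCount has not advanced between the two most recent hourly samples; note that the very first sample per queue also shows a delta of 0 and may need to be excluded.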
When using dbxquery, is a bare | search needed after the query?

| dbxquery connection="DRTP-Connection" query="
SELECT <X, Y, Z>
FROM <wherever>
WHERE <wherever>;"
| search
| table X, Y, Z
| sort - Z

My query appears to work either way, but I have often been critical of using extra 'search' commands within a query, so I am curious whether there is a performance hit.
Hello everyone, I'm trying to send my cluster pod logs to Splunk with collectorforkubernetes. When I launch the YAML config file for the collector, the pods seem to be running fine, but I don't get any data in Splunk. Running kubectl logs on my pods shows a connection refused error. Why is the connection refused? Is there something to do with my indexes? Thanks,
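One quick way to narrow down where the refusal happens is to test HEC reachability from inside the cluster. A sketch, assuming the collector sends to HEC on the default port 8088 and the collector image ships curl (otherwise run the same check from a debug pod); the pod name and Splunk host are placeholders:

kubectl exec -it <collector-pod> -- \
  curl -vk "https://<splunk-host>:8088/services/collector/health"

If this also gets connection refused, the problem is network reachability or the HEC listener rather than index configuration; if it returns a healthy response, the indexes and token settings in the collector config are the next thing to check.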
Hi, we are looking to install AppDynamics on a stand-alone SAP HANA database that is not attached to an SAP ECC system. I'm looking for a proper step-by-step procedure to get this installation done. If anyone knows how to configure AppDynamics monitoring on SAP HANA, please share the procedure. Thanks in advance. ~Azhar
OLD query:

source="http:Emerson_P1CDN" AND status_code=200 AND path=*/catalog* AND path!=*thumb* AND path!=*CartRefreshStatusJSON* AND path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp, _time
| where count >= 600
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)
| sort - count
| transpose header_field=_time

When I set this one up as an alert, it treats client_isp and _time as part of the result, so even when there are no matches it sends a blank alert containing only client_isp and time in the first column.

NEW query:

source="http:Emerson_P1CDN" AND status_code=200 AND path=*/catalog* AND path!=*thumb* AND path!=*CartRefreshStatusJSON* AND path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp, _time
| transpose header_field=_time
| sort - count
| where count >= 600
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)

With this one, there were no results at all. What might be wrong with this query?
Hello, I want to report what appears to be a bug in version 22.6: the controller cannot get CPU, RAM, and other metrics from an AIX server. Something in the script seems to be wrong (as shown in the picture below: "ksh run_co: not found"). I worked around it by using the 22.2 version of the machine agent.
Hi all, any help or suggestions? We are confronted with this problem and it does not stop. We have already tried the following, without result:
1) We optimized the performance of the Stream sensor.
2) We switched the syslog receiver from disk buffering to memory buffering.
But the issue remains the same. Any ideas, please? Cheers, Auguste
Hi, I have the following Splunk query:

| dbxquery connection="FFconed_feTenant" query="select count(file_name) as file_count, DATE_FORMAT(created_at,\"%m/%d/%y %W\") as date from ida_files_inventory where created_at > Date_sub(Curdate(), INTERVAL 7 Day) and created_at < Curdate() group by DATE_FORMAT(created_at,\"%m/%d/%y %W\")"

It gives me the per-day count of files received in the last 7 days along with the date. The result is as follows:

date                      file_count
06/07/22 Tuesday          79
06/08/22 Wednesday        46
06/09/22 Thursday         57
06/10/22 Friday           5
06/11/22 Saturday         5
06/12/22 Sunday           227
06/13/22 Monday           187

I want to calculate the running average of file_count across these days. For example:
for the 1st day, the running average is 79/1 = 79
for the 2nd day, the running average is (79+46)/2 = 62.5
for the 3rd day, the running average is (79+46+57)/3 = 60.67
and so on. I want to write a query for this. Please help me with this.
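A minimal sketch of one way to do this, appending streamstats to the existing dbxquery (assuming the rows come back in ascending date order; if not, sort on the date first):

| dbxquery connection="FFconed_feTenant" query="select count(file_name) as file_count, DATE_FORMAT(created_at,\"%m/%d/%y %W\") as date from ida_files_inventory where created_at > Date_sub(Curdate(), INTERVAL 7 Day) and created_at < Curdate() group by DATE_FORMAT(created_at,\"%m/%d/%y %W\")"
| streamstats avg(file_count) AS running_avg
| table date file_count running_avg

streamstats computes the cumulative average over all rows seen so far, which matches the 79, 62.5, 60.67 progression described above.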
Does each search head cluster need its own dedicated deployer server? For example, if we have a three-server search head cluster called Search Head Cluster A with a deployer server, can we use that same deployer server for another three-server cluster, say Search Head Cluster B?
My predecessor installed a search head cluster and used the indexer cluster master as the deployer for the search head cluster. I would like to move that role to a separate machine. Can that be done without having to rebuild the search head cluster?
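A sketch of the usual approach, assuming the new machine already runs Splunk Enterprise and has an [shclustering] stanza in server.conf with the same pass4SymmKey as the cluster; the hostname below is a placeholder:

# On each search head cluster member, point it at the new deployer
splunk edit shcluster-config -conf_deploy_fetch_url https://new-deployer.example.com:8089
splunk restart

Copying the contents of $SPLUNK_HOME/etc/shcluster from the old deployer to the new one before the next apply shcluster-bundle keeps the configuration bundle consistent, so the cluster itself does not need to be rebuilt.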
If I have a retention of 90 days on an index, but events come in with a broken timestamp that says 2018 or something, what governs retention: the index time or the _time from the event?
Hello, is there a way to use the transaction command to tell us whether an activity/attack is ongoing?
Scenario: create a search that detects ongoing DDoS activity.
I have the following search that detects DoS activity events and tracks them using transaction. I see there is a maxspan option available, but there is no minspan. Even if I schedule this to run every hour, maxspan will also return results that span less than one hour. Since there is no minspan option, how do I make it detect an ongoing activity? I hope I am being clear. My search:

index=arbor ...
| transaction eventID startswith=starting endswith=end maxspan=1h
| eval starttime = _time
| eval duration = "Ongoing"
| convert ctime(starttime)
| table starttime, duration, condition

Maybe my approach above is wrong. How else can we accomplish this?
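One possible sketch: transaction can keep transactions that never saw their closing event, and the closed_txn field then distinguishes finished attacks from ongoing ones (the index and field names are taken from the search above; the rest is an assumption about how the data behaves):

index=arbor ...
| transaction eventID startswith=starting endswith=end maxspan=1h keepevicted=true
| where closed_txn=0
| eval starttime = _time, duration = "Ongoing"
| convert ctime(starttime)
| table starttime, duration, condition

With keepevicted=true, transactions that started but have not yet matched the end condition are kept and carry closed_txn=0, so filtering on that flags activity that is still in progress rather than relying on a minimum span.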
I have a real-time Splunk index pushing records into two sourcetypes. Sourcetype 1 holds fields including assignmentgroup, manager name, entity, etc. Sourcetype 2 holds fields including ticketnumber, assignmentgroup, priority, etc. Sourcetype 2 receives ticket updates, and each ticket can move from one assignmentgroup to another, which may or may not be present in sourcetype 1. I would like to find out how many tickets in sourcetype 2 have moved out of the assignmentgroups of sourcetype 1. In other words, how many tickets are present in sourcetype 2 whose assignmentgroup does not belong to the assignmentgroups present in sourcetype 1? Any leads would be helpful. TIA! Just an update: sourcetype 1 is actually pushed to a lookup file (with the same columns as sourcetype 1), so I intend to use this lookup in the search query.
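A minimal sketch of one approach, assuming the lookup file is called assignment_groups.csv, the ticket updates live in an index named itsm, and the sourcetype 2 name is ticket_updates (all of these names are assumptions):

index=itsm sourcetype=ticket_updates
| stats latest(assignmentgroup) AS assignmentgroup BY ticketnumber
| lookup assignment_groups.csv assignmentgroup OUTPUT assignmentgroup AS known_group
| where isnull(known_group)
| stats dc(ticketnumber) AS tickets_outside_known_groups

The stats latest(...) keeps only each ticket's most recent assignment group; the lookup then marks groups that exist in the file, and the isnull filter keeps only tickets whose current group is not in it.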
Hello experts, I am using Splunk Dashboard Studio with Splunk Enterprise version 8.6.2. I have a simple table in my dashboard showing some search results; no scrollbars appear in the dashboard itself and it works fine. When I download the dashboard as a PDF (or PNG), the table has a horizontal scrollbar even though the data displayed is small enough to fit in the table. I have tried the "overflow": "none", overflow-x: none, and font-size properties in the code editor, but the PDF export still has the scrollbar. I am attaching two screenshots: the dashboard with no horizontal scrollbar, and the same dashboard after using Download as PDF. Any ideas on how to remove this? I appreciate your help in advance. Regards
Hi all, I have a freshly installed Splunk instance running the Splunk ITSI app that I will use for a proof of concept at a customer, so I am using a Sales Trial license for Splunk Enterprise and a Splunk IT Service Intelligence trial license for ITSI. On the Splunk Enterprise side there is no issue; I can ingest the data I need. Now, when I open the ITSI app, the first dashboard shows the notification "Unable to retrieve subscription data", as in the picture below. I don't know what subscription data means. And when I try to open another menu in the ITSI app, I get an Internal Server Error - 500 response code. I want to know if anybody here has experienced the same thing; if so, please tell me how you fixed it. Thanks
What are the best HTTP Event Collector (HEC) performance tuning configurations?
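A starting-point sketch rather than a definitive list: the settings usually worth reviewing live in the [http] stanza of inputs.conf on the instances receiving HEC traffic (verify each setting against the documentation for your Splunk version; the values below are illustrative, not recommendations):

# inputs.conf on the HEC receivers
[http]
disabled = 0
dedicatedIoThreads = 4
# useACK adds delivery guarantees at the cost of throughput; enable per token only if needed
useACK = 0

Beyond inputs.conf, HEC throughput usually depends more on spreading traffic across multiple receivers behind a load balancer and on batching events in the clients than on any single setting.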
I am trying to get the SSL Monitoring Extension working and I am seeing this error:

unable to load certificate
140343873517384:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:703:Expecting: TRUSTED CERTIFICATE

I am installing on Linux. Thanks
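That message usually comes from OpenSSL being handed a certificate that is not in PEM format. A sketch of how to check and, if needed, convert a DER-encoded certificate to PEM (the file names are placeholders):

# Inspect the file; this succeeds only if it is already PEM
openssl x509 -in mycert.cer -text -noout

# If the file is DER-encoded, convert it to PEM
openssl x509 -inform der -in mycert.cer -out mycert.pem

Pointing the extension at the resulting .pem file should clear the "no start line" error, assuming the certificate itself is valid.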
Hi everyone, I would like to know if it is possible to export the alerts created in a Splunk Cloud instance. I want to export the queries for each alert all at once. Is that possible? Regards
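One possible sketch using the REST API from the search bar, which lists saved searches and their SPL so the result can be exported as CSV (the filter on alert-related fields is an assumption about how the alerts are configured):

| rest /servicesNS/-/-/saved/searches
| search alert_type!="always" OR alert.track=1
| table title search cron_schedule alert_type

Running this on the Splunk Cloud search head and using Export on the results table gives the alert names and their queries in one file.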