All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello all,

Is there any option that would allow me to add a button to a dashboard to export that dashboard to PDF, in landscape mode, without having to change system settings? This should apply only to this specific dashboard.

Any ideas?
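One possible approach, sketched below under the assumption that your version exposes the pdfgen/render endpoint: add an HTML panel whose link requests a landscape render of just this dashboard. The dashboard ID my_dashboard and app namespace search are hypothetical placeholders.

<row>
  <panel>
    <html>
      <!-- Button-styled link to the PDF render endpoint; paper-size=a4-landscape
           requests landscape orientation for this dashboard only -->
      <a class="btn btn-primary" target="_blank"
         href="/splunkd/__raw/services/pdfgen/render?input-dashboard=my_dashboard&amp;namespace=search&amp;paper-size=a4-landscape">
        Export PDF (landscape)
      </a>
    </html>
  </panel>
</row>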
Hi,

I just upgraded Splunk to 9.0.0 and realized that the log ~/var/log/splunk/splunkd.log started to get populated with messages like:

06-14-2022 16:41:00.924 +0300 ERROR script [2047436 SchedulerThread] - Script execution failed for external search command 'runshellscript'.
06-14-2022 16:41:00.906 +0300 WARN SearchScheduler [2047436 SchedulerThread] - addRequiredFields: SearchProcessorException is ignored, sid=AlertActionsRequredFields_1655214060.1451, error=Error in 'script': Script execution failed for external search command 'runshellscript'.

These appear in the logs regardless of whether the alert has fired or not, and we rely quite heavily on running external scripts to make external systems aware of problems. I thought all our script bindings to alerts were now broken and we would have to do a rollback. However, I tested and the scripts were executed nicely.

My question is: what has changed here, if anything? I would like to get rid of those messages cluttering the logs in vain. And the other thing is: if something really has changed, what should I do to make Splunk happy about the scripts in alerts? I am looking for something other than "Please write a Python script to do the job."

Any clues?
Our users have discovered that they can add data to indexes. This could lead to a user accidentally polluting a production index. I searched the Splunk documentation and the Internet but was unable to find a solution. Does anyone know how we can restrict write access to indexes to the sc_admin role and allow read access for everyone else?
Hi Splunkers,

I'm working on an add-on creation task (Glassfish in particular) and, as with other requests of this kind, I'm configuring the props.conf file. In this configuration I'm facing the following issue. I know that events start with one of two kinds of character sequences:

[#|
a date in the format month (3 letters) and day, for example Jun 07

So in BREAK_ONLY_BEFORE I put the following regex:

[\[\#\|] | [\w{3}\s\d{2}]

and it works fine. A problem arises in the second case: these events have a structure with many carriage returns. Here is a log sample:

Jun 07, 2022 8:29:52 PM <some_path_here>info
INFO: JVM invocation command line: -XX:+UnlockDiagnosticVMOptions -XX:MaxPermSize=<size> -XX:PermSize=<size> -XX:NewRatio=<size> -Xms<size> -Xmx4096m
<other lines that always start with the - symbol>

In this case, the default event line breaking splits every piece of this event into a different event. So I set

SHOULD_LINEMERGE=1

but I still have problems; even with this configuration, the events are not properly merged. What I get are 3 different events, split in this way:

Jun 07, 2022 8:29:52 PM <some_path_here>info

then the first part of the info starting with the - symbol:

INFO: JVM invocation command line: -XX:+UnlockDiagnosticVMOptions -XX:MaxPermSize=<size> -XX:PermSize=<size> -XX:NewRatio=<size> -Xms<size> -Xmx4096m

then the remaining part of the info starting with the - symbol:

-Djavax.net.<remaining path> -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=<value> -DANTLR_USE_DIRECT_CLASS_LOADING=<value>

To fix this, I tried to use:

MUST_NOT_BREAK_AFTER=[\r\n]+

but it does not work. The event is still divided into the above 3 parts. How can I fix it?
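A minimal props.conf sketch of one way to attack this (the sourcetype name glassfish is a hypothetical placeholder): turn line merging off entirely and let LINE_BREAKER decide where events begin, so the -D... continuation lines never get split off in the first place. The first capture group is consumed as the event delimiter, and the lookahead only matches newlines followed by either the [#| marker or a "Mon DD, YYYY" timestamp.

[glassfish]
# Break only where LINE_BREAKER matches instead of merging lines back together
SHOULD_LINEMERGE = false
# An event starts at "[#|" or at a "Jun 07, 2022"-style timestamp
LINE_BREAKER = ([\r\n]+)(?=\[#\||[A-Z][a-z]{2} \d{1,2}, \d{4})

Note that the original BREAK_ONLY_BEFORE pattern puts each alternative inside square brackets, which turns it into a character class ([\w{3}\s\d{2}] matches a single character), so a grouped alternation like the lookahead above is needed in either approach.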
Hi guys. I'm trying to use the Configuration Item field in the ServiceNow integration to pass a dynamic field to SNOW, so I'm trying something like $System_name.result$, but it didn't work. Do you know if this field accepts this format? Thank you. Clecimar
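For what it's worth, Splunk alert-action tokens normally put the field name after result., not before it, so assuming System_name is a field returned by the triggering search, the token would be written as:

$result.System_name$

Whether the Configuration Item field of this particular integration expands result tokens at all is a separate question, but the reversed form would not expand even where tokens are supported.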
Looking to improve your security posture and address our June 2022 security advisories? You have come to the right place, and here are some helpful resources for you!

* Product Security Page - Subscribe to get notified of all recent advisories
* Improve Your Security Posture Tech Talk - Technical webinar focusing on our 9.0 security features and June 2022 security advisories
* Splunk Enterprise Upgrade Best Practices - Lantern page with general tips for upgrades
* Customer FAQ - Common questions on our recent security posture initiative
* Documentation - All the juicy details on how to take action

Still have questions?
* If related to these advisories, you can comment below!
* If related to securing your Splunk instance, you can post a new question on this board!
* If specific to Splunk Enterprise or Splunk Cloud Platform, you can post to those boards!
Hi,

Say I have the following table:

Name  2022-06-07 10:01:14  2022-06-07 22:01:13  2022-06-08 10:01:11  2022-06-08 22:01:25  2022-06-09 10:01:22  2022-06-09 22:00:59  2022-06-10 10:01:28
a     301                  300                  302                  303                  301                  400                  412
b     200                  220                  235                  238                  208                  300                  302

Can I color a cell based on the rate of increase from the previous value? For instance, if the value increased by 10% it would be yellow, by 20% orange, and so on. I'm looking for a solution based on Simple XML where no additional files are needed.

Thanks.
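A sketch of one Simple XML-only route, with hypothetical field names: since <colorPalette type="expression"> only sees the value of the cell it is formatting, the percent change can be precomputed into its own column in SPL and that column colored by thresholds.

<table>
  <search>
    <query>
      ... | eval pct_change = round((current - previous) / previous * 100, 1)
    </query>
  </search>
  <!-- Color the precomputed pct_change column by thresholds -->
  <format type="color" field="pct_change">
    <colorPalette type="expression">if (value &gt;= 20) "#FFA500" else if (value &gt;= 10) "#FFFF00" else "#FFFFFF"</colorPalette>
  </format>
</table>

Coloring one column based on a comparison with a neighboring column, rather than on its own value, is not something the expression palette supports by itself.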
Hi Community,

I have a dashboard that gives me an overview of the details. When I click on one of the rows, it takes me to a different dashboard, which takes its time range from the first dashboard and performs a granular search, within those time limits, based on the parent ID. One of its panels shows no data at all. When I look at the SPL of the empty panel, I realise that it searches on a millisecond scale: the time range is within one second. The search is driven by a data model acceleration, which can only accelerate down to whole seconds, so if I widen the time range to more than a second I get the desired results.

To fix this, the only options I can think of are reconstructing the SPL without data models (but that will slow down the search) or manipulating the time range so that I can get the data. Is there some other option I can use to get the desired results? Thanks in advance.

Regards,
Pravin
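A minimal sketch of the time-range manipulation idea on the drilldown side, with hypothetical token, field, and dashboard names: pad the clicked row's time by a second on each side before passing it along, so the accelerated search always spans more than one second.

<drilldown>
  <!-- Widen the window so the DMA-backed search has a range greater than 1s -->
  <eval token="pad_earliest">$row._time$ - 1</eval>
  <eval token="pad_latest">$row._time$ + 2</eval>
  <link target="_blank">/app/search/child_dashboard?form.parent_id=$row.parent_id$&amp;earliest=$pad_earliest$&amp;latest=$pad_latest$</link>
</drilldown>

The panel on the child dashboard can then re-filter back down to the exact millisecond window in SPL (for example with a where clause on _time) after the accelerated search has returned the wider slice.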
Hello,

First of all, sorry for my lack of knowledge if my question looks silly.

I have a datasource providing events as follows: State Start Timestamp / UserName / StateName / Duration of the State in seconds / State End Timestamp.

I'm trying to produce a timechart showing the duration in each state for each user with a 1h span, so that we could clearly see the time spent in each state by the users for each hour of the day. The issue is that a user can start a state at a given time with a duration bigger than 1h. For example, a user logs in and is available at 8:32, and stays in the "Available" state for 2h.

What I get so far with a basic timechart span=1h of the states by user:
2h in the 8h span
nothing in the 9h span
nothing in the 10h span

I would need to manipulate the query or the events so that, in this example, the timechart reports:
28 min in the 8h span
1 hour in the 9h span
32 min in the 10h span

as the state lasted from 8:32 to 10:32.

Here's my query today:

| eval AvailableDuration = if(State="Available",Duration,0)
| eval BusyDuration = if(State="Busy",Duration,0)
| eval CallDuration = if(State="In Call",Duration,0)
| timechart span=1h fixedrange=false useother=f limit=0 sum(CallDuration) as "In call" sum(AvailableDuration) as "available" sum(BusyDuration) as "Busy" by UserName

Is there a way to redistribute the durations by manipulating the data so that each hourly span is properly populated?

Thanks in advance for your help!
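One common way to redistribute a long state across the hourly buckets it overlaps is to expand each event into one row per hour with mvrange/mvexpand and clip the duration to each bucket. A minimal sketch, assuming Duration is in seconds and _time is the state start (adapt the final timechart to your per-state evals and UserName split):

| eval start=_time, end=_time+Duration
| eval hour=mvrange(floor(start/3600)*3600, end, 3600)
| mvexpand hour
| eval slice=min(end, hour+3600) - max(start, hour)
| eval _time=hour
| timechart span=1h sum(slice) as SecondsInState by State

For the 8:32 + 2h example, mvrange produces the 8:00, 9:00, and 10:00 buckets, and the min/max clipping yields 1680 s, 3600 s, and 1920 s respectively, which is exactly the 28 min / 1 h / 32 min split you described.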
Hi,

Say I have this table:

Name  Date                 Flows
a     2022-06-13 23:01:26  200
a     2022-06-13 10:01:26  301
b     2022-06-13 23:01:26  504
b     2022-06-13 10:01:26  454

I'd like to create a table that uses the values of the "Date" column as new columns, grouping all identical "Name" values into one line, as follows (where the cell values are "Flows"):

Name  2022-06-13 23:01:26  2022-06-13 10:01:26
a     200                  301
b     504                  454

I tried several approaches but failed. Could you assist?
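This pivot is what xyseries does. A minimal sketch, assuming the three fields are literally named Name, Date, and Flows:

... | xyseries Name Date Flows

An equivalent form is ... | chart values(Flows) over Name by Date. Either way, each distinct Date value becomes a column and each Name becomes a single row with the corresponding Flows values in the cells.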
Need a similar query for Splunk.   SELECT a.[CUSTOMER ID], a.[NAME], SUM(b.[AMOUNT]) AS [TOTAL AMOUNT] FROM RES_DATA a INNER JOIN INV_DATA b ON a.[CUSTOMER ID]=b.[CUSTOMER ID] GROUP BY a.[CUSTOMER ID], a.[NAME]    
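A minimal SPL sketch of the usual join-free translation, where the sourcetype names res_data and inv_data are hypothetical placeholders for wherever the two tables land in Splunk: searching both datasets together and aggregating with stats replaces the INNER JOIN plus GROUP BY.

(sourcetype=res_data) OR (sourcetype=inv_data)
| stats values(NAME) as NAME, sum(AMOUNT) as TOTAL_AMOUNT by CUSTOMER_ID
| where isnotnull(TOTAL_AMOUNT)

The final where keeps only customers that actually have invoice rows, mimicking the INNER JOIN. Splunk's own join command also works but is generally discouraged for large datasets because of its result limits.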
I have monitored 2 applications (one Java app and one Node.js app) in AppDynamics, but when trying to query the applications, it returned NO applications. I'd appreciate your advice on what might be wrong. Thank you.

I also tried to copy the REST URL path from the metrics browser:

https://<>.saas.appdynamics.com/controller/rest/applications/newhello/metric-data?metric-path=Overall%20Application%20Performance%7CCalls%20per%20Minute&time-range-type=BEFORE_NOW&duration-in-mins=720

And the response shows:

HTTP Status 400 - Invalid application id newhello is specified
type: Status report
message: Invalid application id newhello is specified
description: The request sent by the client was syntactically incorrect.
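One quick check, assuming standard controller REST access: list the applications the API user can actually see, then use the exact name or the numeric id from that response in the metric-data URL (names containing spaces or special characters need URL-encoding).

curl -u "apiuser@<account>:<password>" "https://<controller>.saas.appdynamics.com/controller/rest/applications?output=JSON"

If newhello does not appear in that list, the 400 usually means the application name in the URL does not match any application visible to that user.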
How can we subtract the dequeue count of an hour ago from the current dequeue count, to monitor whether queues are progressing? Log lines look like these:

2022/06/14 09:19:02 queue=ABC size=0 consumerCount=2 enqueueCount=12907 dequeueCount=15466
2022/06/14 08:19:02 queue=ABC size=0 consumerCount=2 enqueueCount=12850 dequeueCount=15409

I'm looking to do a subtraction of dequeueCount (15466 - 15409) and set up an alert if it equals 0.

Thanks
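A minimal sketch, assuming the fields are not already auto-extracted (the rex is illustrative) and that one sample per hour is enough: take the latest dequeueCount in each hourly bucket, compare it with the previous hour's value per queue, and keep only the rows where the difference is 0.

index=<your_index> "dequeueCount="
| rex "queue=(?<queue>\S+) size=\d+ consumerCount=\d+ enqueueCount=\d+ dequeueCount=(?<dequeueCount>\d+)"
| bin _time span=1h
| stats latest(dequeueCount) as dequeueCount by queue, _time
| sort 0 queue, _time
| streamstats current=f window=1 last(dequeueCount) as prev_dequeueCount by queue
| eval delta = dequeueCount - prev_dequeueCount
| where delta == 0

Saved as an hourly alert with a trigger condition of "number of results > 0", this fires only for queues whose dequeue count did not move in the last hour.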
When using dbxquery, is there a | search needed after the query?

| dbxquery connection="DRTP-Connection" query="SELECT <X, Y, Z> FROM <wherever> WHERE <wherever>;"
| search
| table X, Y, Z
| sort - Z

My query appears to work either way, but I have often been critical of using extra 'search' commands within a query, so I am curious as to whether there is a performance hit.
Hello everyone,

I'm trying to send my cluster pods' logs to Splunk with collectorforkubernetes. When I launch the YAML config file for the collector, the pods seem to be running well, but I don't get any data in Splunk. Running kubectl logs on my pods, I get the following error:

Why is the connection refused? Is there something to do with my indexes?

Thanks,
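Since the symptom is a connection refusal toward Splunk, one quick sanity check, assuming the collector ships over HTTP Event Collector on the default port 8088, is to hit the HEC health endpoint from inside the cluster with the same host and port the collector is configured to use:

curl -k "https://<splunk-host>:8088/services/collector/health"

A healthy HEC answers with {"text":"HEC is healthy","code":17}; a refusal here instead points at the host/port/TLS settings in the collector's configuration, or at HEC being disabled, rather than at the indexes.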
Hi, we are looking to install AppDynamics on a stand-alone SAP HANA database that is not attached to an SAP ECC system. I'm looking for a proper step-by-step procedure to get this installation done. If anyone has a clue on how to configure AppD monitoring on SAP HANA, please share the procedure. Thanks in advance.

~Azhar
Old query:

source="http:Emerson_P1CDN" AND status_code=200 AND path=*/catalog* AND path!=*thumb* AND path!=*CartRefreshStatusJSON* AND path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp,_time
| where count >= 600
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)
| sort - count
| transpose header_field=_time

When I set this one up as an alert, it treats the client_isp and _time as part of the result, so even when no rows crossed the threshold it sends a blank alert with only the client_isp and time in the first column.

New query:

source="http:Emerson_P1CDN" AND status_code=200 AND path=*/catalog* AND path!=*thumb* AND path!=*CartRefreshStatusJSON* AND path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp,_time
| transpose header_field=_time
| sort - count
| where count >= 600
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)

With this one, there were no results at all. What may be wrong with this query?
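A likely culprit in the new query: transpose renames everything, so after | transpose header_field=_time there is no field called count anymore (the time values have become the column names, and the former field names live in a column called "column"), which means | where count >= 600 filters out every row. A sketch of the usual shape, keeping the threshold before the transpose:

source="http:Emerson_P1CDN" status_code=200 path=*/catalog* path!=*thumb* path!=*CartRefreshStatusJSON* path!=*PriceAjaxView*
| bucket span=1m _time
| stats count by client_isp, _time
| where count >= 600
| sort - count
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(_time)
| transpose header_field=_time

The blank alerts from the old query are usually a trigger-condition issue rather than a search issue: with the alert set to trigger only when the number of results is greater than 0, nothing is sent when no ISP crosses the threshold.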
Hello, I want to report what seems to be a bug in version 22.6: the controller cannot get CPU, RAM, and other information from an AIX server. Maybe some script is wrong; the error shown was "ksh run_co: not found". I fixed it by using the 22.2 version of the machine agent.
Hi All,

Any help or suggestions? We are confronted with a problem that does not stop. We have already tried the following, but without result:

1) We optimized the performance on the Stream sensor.
2) We switched from disk buffering to memory buffering for the syslog receiver.

But the issue remains the same. Any ideas, please?

Cheers,
Auguste
Hi, I have the following Splunk query:

| dbxquery connection="FFconed_feTenant" query="select count(file_name) as file_count, DATE_FORMAT(created_at,\"%m/%d/%y %W\") as date from ida_files_inventory where created_at > Date_sub(Curdate(), INTERVAL 7 Day) and created_at < Curdate() group by DATE_FORMAT(created_at,\"%m/%d/%y %W\")"

It gives me the per-day count of files received in the last 7 days along with the date. The result is as follows:

date                 file_count
06/07/22 Tuesday     79
06/08/22 Wednesday   46
06/09/22 Thursday    57
06/10/22 Friday      5
06/11/22 Saturday    5
06/12/22 Sunday      227
06/13/22 Monday      187

I want to calculate the running averages of file_count over these days. For example:
for the 1st day, the running average is 79 / 1 = 79
for the 2nd day, the running average is (79 + 46) / 2 = 62.5
for the 3rd day, the running average is (79 + 46 + 57) / 3 = 60.67
and so on.

I want to write a query for this. Please help me with it.
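Since the rows already come back in date order, streamstats computes exactly this cumulative average. A minimal sketch of the pipeline to append after the dbxquery:

| streamstats avg(file_count) as running_avg
| eval running_avg = round(running_avg, 2)

streamstats aggregates over all rows seen so far, including the current one, so the first row gets 79, the second (79 + 46) / 2 = 62.5, and so on.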